
Department of Physics and Astronomy
University of Heidelberg
Master thesis
in Physics
submitted by
Paulus Salomon Bauer
born in Salzburg
2013
Development of an imaging polarimeter
for water wave slope measurements
This Master thesis has been carried out by Paulus Salomon Bauer
at the
Institute of Environmental Physics
under the supervision of
Prof. Dr. Bernd Jähne
Abstract:
An imaging polarimetric slope sensing instrument for measuring water waves has been developed. From the measurement of the intensities of three different linear polarization states,
it is possible to determine the first three components of the polarization Stokes vector. The
slope of the water surface is computed from the measurement of the polarization of reflected
light. Unlike in common polarimeters, custom optics are not required in this simple setup
consisting of three cameras aligned in parallel and each equipped with a standard polarization filter. The trade-off for the simple setup is the need for more extensive system calibration and image post-processing. The camera setup was fully calibrated (extrinsic, intrinsic,
and distortion parameters) with a specialized calibration procedure using a custom built target. The analyzer matrix, which transforms the intensities of the measured polarization states
into the Stokes vector components was determined and verified experimentally. A data set
collected during an experiment on board the research vessel Meteor is analyzed. In an experiment at the Hamburgische Schiffsbau-Versuchsanstalt, the polarimeter was successfully
operated under laboratory conditions. It is shown to be capable of measuring the slope distribution of mechanically generated waves. Elevation power spectra, determined by integration
of the slope measurements, show good agreement with reference measurements with a wave
wire. Deviations at low wave frequencies due to the small size of the polarimeter footprint can be compensated with a transfer function that is derived from the measurements.
Zusammenfassung:
An imaging polarimetric instrument for measuring the slope of water waves has been developed. From the measurement of three linear polarization states, the first three components of the Stokes vector are determined. The slope of the water surface is computed from the measured polarization of reflected light. In contrast to conventional polarimeters, no specialized optical elements are required; instead, a simple setup of three cameras arranged in parallel, each equipped with a polarization filter, is used. The additional effort in calibration and image processing is accepted as a trade-off. The camera setup was calibrated (extrinsic, intrinsic and distortion parameters) with a dedicated method and a custom-built calibration target. The analyzer matrix, which converts the measured polarization states into the components of the Stokes vector, was determined and verified experimentally. A data set recorded during an experiment on board the research vessel Meteor was analyzed. In an experiment at the Hamburgische Schiffsbau-Versuchsanstalt the polarimeter was successfully operated under laboratory conditions. Its ability to measure the slope distribution of mechanically generated waves is demonstrated. Wave-height power spectra obtained by integrating the slope measurements agree well with reference measurements from a wave wire. Deviations occurring at low frequencies due to the small size of the polarimeter footprint can be corrected with an experimentally determined transfer function.
Contents

1. Introduction    1
2. Theory    3
    2.1. The electrodynamics of continuous media    3
        2.1.1. Maxwell's equations    3
        2.1.2. The electromagnetic wave equations or Light    4
        2.1.3. Electromagnetic fields in matter    5
        2.1.4. Electromagnetic Fields at a boundary    6
        2.1.5. Electromagnetic Fields in Dielectric Materials    6
        2.1.6. Electromagnetic Waves in Matter    6
        2.1.7. Energy density, Poynting Vector and Intensity    7
    2.2. Polarization of Light    8
        2.2.1. The polarization ellipse    8
        2.2.2. Stokes Vector    9
        2.2.3. Müller-Matrices    12
    2.3. Fresnel equations    13
3. Method    17
    3.1. Imaging Polarimeter    17
    3.2. Water as dielectric matter    18
        3.2.1. Polarization of the Sky    20
    3.3. Polarimetric Slope Imaging    21
    3.4. Height Reconstruction    22
    3.5. Constraints for the Polarimeter technique    23
4. Experiments and Setup    25
    4.1. Experiments at the Meteor    25
    4.2. Stereo Polarimeter    27
    4.3. Setup Meteor    27
    4.4. Experiments in Hamburg    29
    4.5. Setup Hamburg    30
        4.5.1. Setup at the HSVA    30
5. Calibration    33
    5.1. Coordinate Systems    33
        5.1.1. Pixel Coordinate Frame    33
        5.1.2. Image Coordinate Frame    33
        5.1.3. World Coordinate Frame    34
    5.2. Coordinate Transformations    34
        5.2.1. 2D Projective Transformation    34
        5.2.2. 3D Transformations    35
    5.3. Camera Matrix    36
    5.4. Imaging Optics and Optical Aberration    37
        5.4.1. Field darkening    37
        5.4.2. Dark Noise    38
        5.4.3. Depth-of-field    39
        5.4.4. Distortion    40
    5.5. Calibration in Hanau    42
        5.5.1. Target    43
        5.5.2. Linear Translation Axis    43
        5.5.3. Detection of the Target    45
    5.6. Polarization Filter Calibration    45
        5.6.1. Test of the Polarization Filter Calibration    46
6. Data Processing    49
    6.1. Data Acquisition    49
    6.2. Radiometric Correction    49
    6.3. Distortion Correction    50
    6.4. Mapping of the Images    50
    6.5. Calculation of the Slope Distribution    51
    6.6. Calculation of the Height Distribution    51
        6.6.1. NaN-Reconstruction    52
    6.7. Timing of the Data Processing    52
7. Results    53
    7.1. Results of the Meteor    53
    7.2. Results from Hamburg    55
        7.2.1. Example Images    55
        7.2.2. Experimental conditions    57
        7.2.3. Slope Images    59
        7.2.4. Height Reconstruction    60
        7.2.5. Monochromatic Height Spectra    62
        7.2.6. Continuous Height Spectra    64
        7.2.7. Polarimeter Characteristics    66
8. Conclusion and Outlook    69
    8.1. Conclusion    69
    8.2. Outlook    70
Bibliography    71
A. Appendix    75
    A.1. Rotation Matrices    75
    A.2. Target in Hanau    77
    A.3. Intrinsic Parameters    78
1. Introduction
Figure 1.1.: The blue marble, earth. Source: NASA http://visibleearth.nasa.gov/view.php?id=57723
From a satellite's view, Earth appears as a beautiful blue marble. This is because about 71 percent of the Earth is covered with water. The vast ocean not only determines the color of our planet but also influences the climate of the Earth in a profound way¹.
Especially the exchange of gases, momentum, heat and energy between ocean and atmosphere plays an important role for our climate. One example is the oceanic uptake of 30-40% of the anthropogenic CO2 (Donelan and Wanninkhof, 2002). Even after decades of intensive research, the exchange processes and their physical parametrization are still not fully understood. It has been pointed out that the near-surface turbulence has a significant influence on the exchange processes (Frew et al., 2004). The mean squared slope, a measure of small-scale waves, describes the near-surface turbulence (Jähne et al., 1987). Hence, the simultaneous acquisition of wave and gas exchange data is necessary to obtain a physically based parametrization of the exchange processes.
Traditional instruments such as wave wires (Donelan et al., 1985) or floating buoys (Longuet-Higgins et al., 1963) measure the elevation of the water surface, which is useful for swell and long wind waves. To resolve the small-scale waves, which have a small amplitude and are superimposed on the large-scale waves, an amplitude measurement technique would need a very high dynamic range. The preferred solution is to measure the slope of the surface, since the slope has a much smaller dynamic range: the breaking of steep waves limits the maximum slope.
1 NASA Ocean Motion: http://oceanmotion.org/html/background/climate.htm
Surface Ocean Lower Atmosphere Study (SOLAS): http://www.solas-int.org
In the laboratory, the slope of small scale waves can be measured with high temporal and
spatial resolution using an imaging slope gauge (ISG, Rocholz (2008)). This instrument can
measure the slope of the surface from the refraction of light coming from an underwater light
source. For field measurements, installing a light source underwater is often not feasible. Thus,
it is more convenient to use light that is reflected at the water surface for the measurements.
Stilwell (1969) developed a measurement technique that relies on the dependence of the reflection coefficient on the incidence angle of light (known as Stilwell photography). This method
has strong limitations because it requires very homogeneous illumination. Furthermore, the
relation between the slope and the intensity of the reflected light is highly nonlinear (Jähne
et al., 1994).
Another approach, which overcomes many problems of Stilwell photography, does not rely on the intensity of the reflected light but measures its polarization state (Zappa et al., 2008). This has two significant advantages: First, the dependence of the used polarization measures (degree of linear polarization and orientation of polarization) on the incidence angle is not as nonlinear as that of the reflected intensity. Second, the measurements are based on ratios of measured intensities, which makes them independent of inhomogeneities in the illumination.
Studies that have applied polarimetric slope sensing to small-scale water waves have so far used instruments with custom lenses and complex optical setups containing polarizing beam splitters (Pezzaniti et al., 2008, 2009; Zappa et al., 2012). While this is an elegant solution, it is also very expensive.
For this work, a new approach to an imaging polarimeter was tested, using three cameras placed next to each other. Each camera is equipped with a standard polarization filter, which makes the whole setup inexpensive. The disadvantage of this setup is the need for more extensive image processing. The aim of this thesis was to build the newly designed polarimeter, to calibrate the whole system, to develop a data processing chain and to characterize its performance.
2. Theory
2.1. The electrodynamics of continuous media
2.1.1. Maxwell’s equations
The basic principles of optics and the phenomena of the electromagnetic field are, apart from quantum effects, completely described by Maxwell's equations together with the Lorentz force. Hence, to gain a deeper understanding of the interaction between light and matter and of the polarization of light, some general quantities of electrodynamics will be stated or derived in the next sections. First of all, Maxwell's equations are stated (here in SI units).
Homogeneous equations

∇ · B = 0    (2.1)

which states that there are no magnetic monopoles, and

∇ × E + ∂B/∂t = 0    (2.2)

which corresponds to Faraday's law of induction.

Inhomogeneous equations

∇ · E = ρ/ε_0    (2.3)

which is Gauss's law, and

∇ × B − µ_0 ε_0 ∂E/∂t = µ_0 j    (2.4)

which is Ampère's circuital law with Maxwell's correction.
Together with the Lorentz force

F_L = q · [E + (v × B)]    (2.5)

Maxwell's equations describe all classical phenomena of electromagnetic interaction in vacuum. Here, E denotes the electric field vector, B the magnetic induction vector, ρ the electric charge density, j the electric current density, ε_0 the permittivity of vacuum, µ_0 the permeability of vacuum and q the electric charge of the particle on which the electromagnetic field acts.
2.1.2. The electromagnetic wave equations or Light
In vacuum there is neither an electric charge density (ρ = 0) nor an electric current density (j = 0), so Maxwell's equations read:

∇ · E = 0;   ∇ · B = 0;   ∇ × E = −∂B/∂t;   ∇ × B = ε_0 µ_0 ∂E/∂t    (2.6)
These partial differential equations can be decoupled using vector identities, as shown in many textbooks¹, and yield the homogeneous wave equation for the E-field as well as for the B-field:

( (1/c_0²) ∂²/∂t² − ∇² ) E = □E = 0   with   1/c_0² = ε_0 µ_0    (2.7)

where □ is the d'Alembert operator and c_0 is the speed of light in vacuum. With these substitutions, equations (2.6) reduce to:

□E = 0,   ∇ · E = 0,   □B = 0,   ∇ · B = 0    (2.8)

¹ A recommended textbook is Jackson (1998).
The solution to these equations is given by plane waves:

E(x, t) = ℜ( E_0 exp(i(k · x − ωt)) )    (2.9)
B(x, t) = ℜ( B_0 exp(i(k · x − ωt)) )    (2.10)

Here ℜ denotes the real part of the solution, and k and ω are the wave vector and the angular frequency, respectively.
Properties of the wave solution
This section shows some important properties of the wave solution which will be used in further derivations.
Dispersion relation   The dispersion relation is one of the easiest properties to derive. Inserting Eq. (2.9) into Eq. (2.7) gives the dispersion relation in vacuum:

ω² = k² c_0²,    (2.11)

where k = |k|.
Transversality of electromagnetic waves   From ∇ · E = 0 and Eq. (2.9) we obtain

k · E = 0    (2.12)

and the same for B. This means the field vectors E and B are transverse to the propagation direction k.
Orthogonality of E and B   From the wave solution Eq. (2.9) and the Maxwell equation

∇ × E = −∂B/∂t    (2.13)

it follows that

k × E = ωB    (2.14)

This means that E ⊥ B, and E, B and k span an orthogonal coordinate system. The relative normalization of E and B is also fixed by this equation:

|B| = |E|/c_0    (2.15)
2.1.3. Electromagnetic fields in matter
In principle, it is possible to use the same equations as in Sec. 2.1.1 to calculate the electromagnetic field in any kind of material. Since this would require an enormous computational effort, we only consider temporal and spatial averages of the field, which is called the macroscopic view. A microscopic solution is also not necessary for most experiments, since in the laboratory only temporal and spatial averages of the field can be controlled. Hence we have to distinguish between the microscopic field and the macroscopic (average) field².
Another simplification is that we will only examine linear and isotropic optical materials, in which the electric displacement density D and the magnetic field vector H are proportional to E and B. We define D and H as:

D = ε_r ε_0 E = εE    (2.16)
H = B/(µ_r µ_0) = B/µ    (2.17)

Here ε is the permittivity and µ is the permeability of the material (ε_r and µ_r are called relative permittivity and relative permeability, respectively).
With these new fields we can rewrite Maxwell's equations as:

Homogeneous equations

∇ · B = 0,   ∇ × E + ∂B/∂t = 0    (2.18)

Inhomogeneous equations

∇ · D = ρ_f,   ∇ × H − ∂D/∂t = j_f    (2.19)
As one can see, the homogeneous equations (Eq. (2.18)) are unchanged, since the properties of the matter have no influence on them, whereas the inhomogeneous equations have changed. Here ρ_f and j_f denote the electric charge density and the electric current density of the free charge carriers. The bound charge carriers are polarized by the field; this is accounted for in the D- and H-fields through the relative permittivity ε_r and the relative permeability µ_r. Note also that the E- and B-fields are treated asymmetrically, as Eqs. (2.16) and (2.17) show (ε appears in the numerator and µ in the denominator). This seemingly unfortunate choice has a practical reason: in experiments it is easier to control the E- and H-fields, which are the two independent fields.
² The notation of the microscopic fields of Sec. 2.1.1 is kept for the macroscopic fields of this section, even though the latter are averaged over time and space.
2.1.4. Electromagnetic Fields at a boundary
Because we want to deal with at least two different materials, it is not enough to know what happens inside these materials (which is given by the equations of Sec. 2.1.3); we also have to understand the effects at the boundary. The most interesting question at a boundary is which component of the field is continuous. Therefore we first investigate the homogeneous equations (Eq. (2.18)) at a boundary. With the divergence theorem and ∇ · B = 0 it is possible to show that the normal component B_n is continuous at a boundary. With Stokes' theorem we find from the second homogeneous equation that the tangential component E_t is continuous there. The same can be done with the inhomogeneous equations (Eq. (2.19)) using these two theorems. Thus, we get
n × (H_t^(1) − H_t^(2)) = i_f,   D_n^(1) − D_n^(2) = γ_f    (2.20)

where the subscripts n and t denote the normal and tangential components, n is the normal vector of the surface, and γ_f and i_f are the free surface charge and the free surface current at the boundary between the two materials.
2.1.5. Electromagnetic Fields in Dielectric Materials
In nature, materials can roughly be sorted into conductors and insulators (also called dielectric materials). Because conductors are not transparent and mostly act as mirrors, the main focus in optics is on dielectric materials. What is so special about them? A dielectric material can be a gas, a fluid or a solid in which all charge carriers are bound and therefore cannot move. As a consequence:

ρ_f = 0 ⇒ γ_f = 0   and   j_f = 0 ⇒ i_f = 0    (2.21)
This means that at the boundary of a dielectric material the normal components of the B- and D-fields and the tangential components of the E- and H-fields are continuous. The continuous quantities are

n · B,   n · D,   n × E,   n × H    (2.22)

where n is the normal vector of the surface.
Another interesting property of dielectric materials is that they are normally non-magnetic. This means that we can ignore the permeability and set µ_r = 1.
2.1.6. Electromagnetic Waves in Matter
If the macroscopic field equations of Sec. 2.1.3 are combined with the results of Sec. 2.1.5 (especially Eq. (2.21)), the equations look much like the microscopic equations (2.6) in vacuum of Sec. 2.1.2. After decoupling, they can be written as:

□E = 0,   □D = 0,   ∇ · D = 0,   □B = 0,   □H = 0,   ∇ · B = 0    (2.23)

As mentioned in Sec. 2.1.3, we are mostly interested in the E- and H-fields and will therefore only use these two fields in further considerations. The solution of the wave equation in matter for E and H is

E(x, t) = ℜ( E_0 exp(i(k · x − ωt)) )    (2.24)
H(x, t) = ℜ( H_0 exp(i(k · x − ωt)) )    (2.25)
The waves have the same properties as discussed in Sec. 2.1.2. Especially the transversality
and the orthogonality of E and H stay the same. These two properties will be important for
the polarization of light (Sec. 2.2) and for the Fresnel equations (Sec. 2.3). The dispersion
relation will be used in the next section.
Phase and Group velocity
As we have seen in Eq. (2.7) of Sec. 2.1.2, a phase velocity is already contained in the d'Alembert operator. For a macroscopic field in matter this velocity cannot be the same as the speed of light in vacuum c_0. The easiest way to get the phase velocity is to use the dispersion relation, as in Eq. (2.11), by reinserting Eq. (2.24) into Eq. (2.23):

v_ph(ω) = c(ω) = ω/k = 1/√(εµ) = 1/√(ε_r µ_r ε_0 µ_0) = c_0/n(ω)    (2.26)

Here we have introduced the index of refraction n = √(ε_r µ_r), which is normally frequency dependent, since ε_r and µ_r depend on the frequency. This effect is called dispersion and is responsible for effects such as the rainbow. Because ε_r and µ_r are in general complex, the refractive index can be complex as well. The imaginary part of the refractive index represents a damping of the electromagnetic wave amplitude in the material. It is important for opaque materials and metals, which are not considered further here.
The phase velocity can even become larger than the speed of light in vacuum (c > c_0), which is not in conflict with the theory of relativity, because energy and information are transported with the group velocity v_gr:

v_gr(ω) = (dω/dk)|_(k = k_0)    (2.27)
2.1.7. Energy density, Poynting Vector and Intensity
One of the most important properties of the electromagnetic field, and thus of light, is the transport of energy. The derivation of the energy density E and the Poynting vector S is given in many textbooks (Jackson, 1998). They are given by

Energy density:   E = ½ (E · D + B · H)    (2.28)
Poynting vector:   S = E × H    (2.29)

The Poynting vector S can also be regarded as the energy flux density in the energy conservation equation

∂E/∂t + ∇ · S = 0    (2.30)

The Poynting vector plays another important role for the measurement of electromagnetic waves, because only the deposited energy can be measured. This leads to the definition of the intensity:

I [W m⁻²] = 〈|S|〉 = ε_0 n c 〈|E|²〉,   for a plane EM wave: I = ½ ε_0 n c |E_0|²    (2.31)

where 〈 〉 denotes the temporal mean over one period T.
2.2. Polarization of Light
Due to the orthogonality of E and H in electromagnetic waves (Sec. 2.1.2 and Sec. 2.1.6, Eq. (2.14)) it is only necessary to consider one field, which will be the E-field in our case. Without loss of generality we can choose k to point in the z-direction; the transversality of EM waves (k · E = 0) then implies that E must lie in the x,y-plane. We can therefore write the solution to the wave equation Eq. (2.24) as:

E = E_x + E_y = ( E_x0 cos(kz − ωt),  E_y0 cos(kz − ωt + ϕ),  0 )ᵀ    (2.32)

where E_x0, E_y0 are the amplitudes of the E-field in x- and y-direction and ϕ is a phase factor between the x- and y-components. The different configurations of these factors (E_x0, E_y0, ϕ) give the different types of polarization, which will be explained in the following.
2.2.1. The polarization ellipse
The trace of the E-vector over time in a fixed x,y-plane (at z = z_0) describes an ellipse. This can be seen by using some trigonometric identities and combining the E_x(t) and E_y(t) components of the E-field. After some mathematical manipulation the equation of the polarization ellipse arises (see Schott (2009), Brosseau (1998)):

E_x²/E_x0² + E_y²/E_y0² − 2 (E_x E_y)/(E_x0 E_y0) cos ϕ = sin² ϕ    (2.33)
An illustration of the polarization ellipse is shown in Fig. 2.1.

Figure 2.1.: Illustration of the polarization ellipse with the polarization angle Φ
Equation (2.33) is the general equation of an ellipse, which is rotated by an angle Φ, where

tan 2Φ = 2 E_x0 E_y0 cos ϕ / (E_x0² − E_y0²)    (2.34)
Even more generally, the amplitude of E as well as the phase difference (and hence also the angle Φ) can change over time. This means that the polarization ellipse starts to rotate over time.
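As a quick numerical illustration of Eqs. (2.32)-(2.34) (a sketch written for this text with arbitrarily chosen amplitudes and phase, not part of the thesis processing), the field components can be traced over one period and checked against the ellipse equation:

```python
import numpy as np

# Arbitrary example values for the amplitudes and the phase shift
Ex0, Ey0, phi = 2.0, 1.0, np.pi / 3

t = np.linspace(0.0, 2.0 * np.pi, 1000)   # one period of omega*t at z = z0
Ex = Ex0 * np.cos(t)                      # components of Eq. (2.32)
Ey = Ey0 * np.cos(t + phi)

# Left-hand side of the polarization ellipse, Eq. (2.33)
lhs = (Ex / Ex0)**2 + (Ey / Ey0)**2 - 2 * Ex * Ey / (Ex0 * Ey0) * np.cos(phi)
print(np.allclose(lhs, np.sin(phi)**2))   # True: the trace lies on the ellipse

# Rotation angle of the ellipse, Eq. (2.34)
Phi = 0.5 * np.arctan2(2 * Ex0 * Ey0 * np.cos(phi), Ex0**2 - Ey0**2)
print(np.degrees(Phi))
```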
One aspect of the propagation of elliptical EM waves is not covered by the above equations: the left- or right-handedness of the rotation. As neither elliptical nor circular polarization is relevant for this thesis, this topic will not be considered further. Two special cases of elliptical polarization are linear polarization and circular polarization.
Linear Polarization
Linear polarization is a special case of elliptical polarization in which the phase shift ϕ between the E_x and E_y components is an integer multiple of π (ϕ = n · π; n ∈ ℕ). This means that the E-field oscillates only in one plane, hence the name plane or linear polarization. For the two cases ϕ = 0 and ϕ = π we can write E_y as

E_y = (E_y0/E_x0) E_x   for ϕ = 0,      E_y = −(E_y0/E_x0) E_x   for ϕ = π    (2.35)

This is a line through the origin with slope ±E_y0/E_x0, and we can derive Φ from the slope by

tan Φ = E_y0/E_x0    (2.36)

which corresponds to a simplification of equation (2.34).
Since linear polarization is the most common form of polarized light in nature, we will mainly
concentrate on this in the next chapters.
Circular Polarization
Circular polarization is also a special case of elliptical polarization, in which the phase shift between the two E-vector components is ϕ = (2n+1)/2 · π; n ∈ ℕ and the amplitudes are equal, E_x0 = E_y0 = E_0. Hence the ellipse equation (2.33) reduces to:

E_x²/E_0² + E_y²/E_0² = 1    (2.37)

which is a circle equation.
2.2.2. Stokes Vector
There are many ways to introduce the Stokes vector (see Jackson (1998), Brosseau (1998), Videen et al. (2005)). The easiest way, in my opinion, is described by Schott (2009), which we will follow. The Stokes parameters are one of the most important concepts in this work, because they are easy to measure, as we will see later, and they also describe partly polarized light. An unpolarized light ray has no distinct polarization state, which means that it consists of many EM waves with all kinds of polarization. Partially polarized light is then a light ray that has one favored polarization state but also contains other ones.
To derive the Stokes parameters we start out with the equation of the elliptical polarization Eq. (2.33), taking the time average over one period and squaring as for the intensity Eq. (2.31). After some mathematical reformulation this gives:

(E_x0² + E_y0²)² = (E_x0² − E_y0²)² + (2 E_x0 E_y0 cos ϕ)² + (2 E_x0 E_y0 sin ϕ)²    (2.38)
As we can see here, there are four important terms, which we define as the Stokes parameters:

S_0 = E_x0² + E_y0²    (2.39)
S_1 = E_x0² − E_y0²    (2.40)
S_2 = 2 E_x0 E_y0 cos ϕ    (2.41)
S_3 = 2 E_x0 E_y0 sin ϕ    (2.42)
The first parameter S_0 is the easiest to interpret, since it is the squared norm of the E-vector of Eq. (2.32), where the time average has already been taken. Hence we can conclude from Eq. (2.31) that this parameter represents the total intensity of the light ray. The second parameter S_1 becomes clear if we consider only horizontally or vertically polarized light, where only E_x0 or E_y0 exists, respectively; S_1 thus represents these polarization states. S_2 describes the polarization in the ±45° directions, and S_3 stands for the amount of left- and right-handed circular polarization. (For further understanding of the parameters, see the definition of the different polarization states in Sec. 2.2.1 or Schott (2009).)
If we now insert the definitions of the Stokes parameters, Eqs. (2.39)-(2.42), into Eq. (2.38), we can derive a very important property of the Stokes parameters:

S_0² = S_1² + S_2² + S_3²    (2.43)

This equation is only valid if the light ray is fully polarized, which is not the case in general. Since S_0 represents the total energy of the ray, the right-hand side of Eq. (2.43) cannot become larger than S_0² because of energy conservation. A partly polarized ray is therefore characterized by

S_0² > S_1² + S_2² + S_3²    (2.44)
Another property of polarized light can be expressed by the Stokes parameters, namely the polarization angle Φ, which is defined in Eq. (2.34):

tan 2Φ = 2 E_x0 E_y0 cos ϕ / (E_x0² − E_y0²) = S_2/S_1   ⇒   Φ = ½ tan⁻¹(S_2/S_1)    (2.45)
The Stokes parameters are often arranged in vector form. This vector has no directional meaning, but the convenience of this representation becomes clear in the next section (Sec. 2.2.3):

S = (S_0, S_1, S_2, S_3)ᵀ = (E_x0² + E_y0², E_x0² − E_y0², 2 E_x0 E_y0 cos ϕ, 2 E_x0 E_y0 sin ϕ)ᵀ = S_0 · (1, S_1/S_0, S_2/S_0, S_3/S_0)ᵀ    (2.46)

Since in polarimetry we are mostly interested in the state of polarization, the normalized Stokes vector, which is divided by the total intensity, is introduced.
A further useful property of Stokes vectors is that they obey the superposition principle:

S_c = S_a + S_b,   i.e.   (S_0c, S_1c, S_2c, S_3c)ᵀ = (S_0a + S_0b, S_1a + S_1b, S_2a + S_2b, S_3a + S_3b)ᵀ    (2.47)
With this principle it is possible to split the Stokes vector of partially polarized light into a fully polarized part and an unpolarized part. Before doing so, we define the degree of polarization (DOP), which quantifies how strongly the ray is polarized in total:

DOP = √(S_1² + S_2² + S_3²) / S_0    (2.48)

In nature there is mostly no circularly polarized light, and therefore we define another concept, the degree of linear polarisation (DOLP), in which only linearly polarized light is considered:

DOLP = √(S_1² + S_2²) / S_0    (2.49)
With these definitions we can write a partly polarized light ray as

S_partly = S_pol + S_unpol = (DOP · S_0, S_1, S_2, S_3)ᵀ + (1 − DOP) · (S_0, 0, 0, 0)ᵀ    (2.50)
To obtain a deeper understanding of the Stokes vector, we place six different ideal polarization filters, which represent the different components of the Stokes vector, in front of an incident beam and measure the intensity behind each filter. Figure 2.2 illustrates this arrangement.

Figure 2.2.: Different polarization filters placed in front of an incident beam to determine its Stokes vector. Source: Schott (2009, p. 39)
The Stokes vector for this setup is given as

S = (I_H + I_V,  I_H − I_V,  I_+45 − I_−45,  I_R − I_L)ᵀ    (2.51)

where I_H, I_V, I_+45, I_−45, I_R, I_L denote the intensities of the different polarization states.
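To make Eq. (2.51) and the definitions (2.45), (2.48) and (2.49) concrete, the following minimal Python sketch (illustrative only; the six intensities are made-up values for a partly polarized beam) assembles the Stokes vector from the six ideal-filter measurements and evaluates DOP, DOLP and the polarization angle Φ:

```python
import numpy as np

# Made-up intensities behind the six ideal filters of Fig. 2.2
I_H, I_V     = 0.7, 0.3    # horizontal / vertical
I_p45, I_m45 = 0.6, 0.4    # +45 deg / -45 deg
I_R, I_L     = 0.5, 0.5    # right / left circular

# Stokes vector, Eq. (2.51)
S = np.array([I_H + I_V,
              I_H - I_V,
              I_p45 - I_m45,
              I_R - I_L])

dop  = np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]   # Eq. (2.48)
dolp = np.sqrt(S[1]**2 + S[2]**2) / S[0]             # Eq. (2.49)
Phi  = 0.5 * np.arctan2(S[2], S[1])                  # Eq. (2.45)

print(S, dop, dolp, np.degrees(Phi))
```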
Table 2.1 gives an overview of the different polarization states and their normalized Stokes vector representations.

Table 2.1.: Representation of the different normalized Stokes vectors. The S stands for German senkrecht (here: horizontal) and the P stands for parallel.

  Horizontal (↔, ⊥, S):        (1, 1, 0, 0)ᵀ
  Vertical (↕, ∥, P):          (1, −1, 0, 0)ᵀ
  Linear +45°:                 (1, 0, 1, 0)ᵀ
  Linear −45°:                 (1, 0, −1, 0)ᵀ
  Right-hand circular:         (1, 0, 0, 1)ᵀ
  Left-hand circular:          (1, 0, 0, −1)ᵀ
  Random (unpolarized, ∗):     (1, 0, 0, 0)ᵀ
2.2.3. Müller-Matrices
The Stokes vector provides a description of the polarization state and the intensity of a beam. But we do not yet have a description of the interaction when this beam is transmitted or reflected by a material. This is where the Müller matrix comes in, which represents the interaction by transforming the Stokes vector. The incoming beam S_in is converted to the outgoing beam S_out as

S_out = M · S_in,   i.e.   (S_0, S_1, S_2, S_3)ᵀ_out = [ m_00 m_01 m_02 m_03 ; m_10 m_11 m_12 m_13 ; m_20 m_21 m_22 m_23 ; m_30 m_31 m_32 m_33 ] · (S_0, S_1, S_2, S_3)ᵀ_in    (2.52)
where M is the Müller matrix describing the property of the optical element.
The easiest way to understand Müller matrices is by looking at different examples of polarizers:

ideal horizontal polarizer:   M_s = ½ [ 1 1 0 0 ; 1 1 0 0 ; 0 0 0 0 ; 0 0 0 0 ]    (2.53)

ideal vertical polarizer:     M_p = ½ [ 1 −1 0 0 ; −1 1 0 0 ; 0 0 0 0 ; 0 0 0 0 ]    (2.54)

ideal +45° polarizer:         M_+45 = ½ [ 1 0 1 0 ; 0 0 0 0 ; 1 0 1 0 ; 0 0 0 0 ]    (2.55)

ideal −45° polarizer:         M_−45 = ½ [ 1 0 −1 0 ; 0 0 0 0 ; −1 0 1 0 ; 0 0 0 0 ]    (2.56)

ideal depolarizing filter:    M_depol = [ 1 0 0 0 ; 0 0 0 0 ; 0 0 0 0 ; 0 0 0 0 ]    (2.57)

The factor of ½ in front of the polarizer matrices is important for energy conservation.
Another property of the Müller matrices is that we can combine them by multiplication. If we send a beam S_in first through a horizontal polarizer and afterwards through a +45° polarizer, the result is

S_out = M_+45 · M_s · S_in    (2.58)
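The composition rule of Eq. (2.58) can be illustrated with a few lines of Python (a toy example using the ideal matrices (2.53) and (2.55); not code from the thesis): unpolarized light is sent through a horizontal polarizer and then through a +45° polarizer.

```python
import numpy as np

# Ideal horizontal polarizer, Eq. (2.53)
M_s = 0.5 * np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])

# Ideal +45 degree polarizer, Eq. (2.55)
M_p45 = 0.5 * np.array([[1, 0, 1, 0],
                        [0, 0, 0, 0],
                        [1, 0, 1, 0],
                        [0, 0, 0, 0]])

S_in = np.array([1.0, 0.0, 0.0, 0.0])    # unpolarized beam (Table 2.1)

# Eq. (2.58): the element closest to the detector is applied last
S_out = M_p45 @ M_s @ S_in
print(S_out)    # [0.25, 0., 0.25, 0.]: a quarter of the intensity, +45 deg polarized
```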
As we have seen in Sec. 2.2, we can choose the frame of reference in which we measure the components of the Stokes vector. The transformation to another reference frame can also be represented by a Müller matrix. A change of the reference frame about the beam axis corresponds to a rotation by the angle θ_R. The rotation Müller matrix is given by

S_θ = R(θ_R) · S_in   with   R(θ_R) = [ 1 0 0 0 ; 0 cos 2θ_R sin 2θ_R 0 ; 0 −sin 2θ_R cos 2θ_R 0 ; 0 0 0 1 ]    (2.59)

As one can easily verify, the rotation has no effect on the circular polarization or on DOP or DOLP, but it changes the polarization angle Φ, which is intuitive.
2.3. Fresnel equations
In this section we want to look at an incoming light ray that is reflected and transmitted at the surface of a dielectric material. These considerations can also be made for non-dielectric materials, and the Fresnel equations remain valid there. For simplicity, we only consider simple linear isotropic dielectric materials, where µ_r = 1 and the index of refraction n = √ε_r is governed by a non-complex permittivity.
A ray with wave vector k_i travels in the material with index of refraction n_1 and hits the surface of the material with index of refraction n_2. This incoming beam gives rise to a reflected beam k_r and a transmitted beam k_t. The situation is depicted in Fig. 2.3. To obtain all relevant effects, we have to take the boundary conditions of Sec. 2.1.2 and Sec. 2.1.5 into account.
Figure 2.3.: An incident beam with wave vector k_i is reflected (wave vector k_r) and transmitted (wave vector k_t) at the boundary between two materials with indices of refraction n_1 and n_2, respectively. For the horizontal (senkrecht) polarization the field vector E_s points out of the paper (indicated by the dot in the circle). For the parallel polarization the field vector H_p points into the paper (indicated by a cross in the circle).
The first boundary condition is that at the surface all spatial and temporal variations of the wave must be the same, i.e. at any moment t the phase factors have to agree. Thus the following condition must hold at the surface z = 0:

(k_i · x)_(z=0) = (k_r · x)_(z=0) = (k_t · x)_(z=0)    (2.60)

This must hold for arbitrary x on the surface, and hence

k_i∥ = k_r∥ = k_t∥   ⇒   k_i sin θ_i = k_r sin θ_r = k_t sin θ_t    (2.61)
where k = |k| and θ is the angle to the surface normal. Since the magnitudes of the incoming and reflected wave vectors must be the same, k_i = k_r, the well-known law of reflection follows: the angle of incidence equals the angle of reflection,

θ_i = θ_r    (2.62)

For the angle of refraction we have to consider the two different indices of refraction, using the dispersion relation k_i c_i = ω = k_t c_t:

k_t/k_i = c_i/c_t = n_2/n_1 = sin θ_i / sin θ_t    (2.63)

This is Snell's law of refraction.
From the continuity of the tangential E-field n × E at a dielectric boundary (see Eq. (2.22)) we can obtain Fresnel's equation for horizontal polarization (perpendicular to the plane of incidence), which will be denoted by s for senkrecht. Here, the field vector E is always tangential to the surface, independent of the angle of incidence. The continuity condition can therefore be written as

E_is + E_rs = E_ts,    (2.64)

whereas the field vectors H are always perpendicular to the E vector and thus must be projected with cos θ onto the surface. With equation (2.15) and µ_r = 1 the magnitude of H is given by H = n E/(µ_0 c_0). Hence the continuity of the tangential component n × H leads to

n_1 E_is cos θ_i − n_1 E_rs cos θ_r = n_2 E_ts cos θ_t    (2.65)
After some mathematical manipulation of Eq. (2.64) and Eq. (2.65) with Eq. (2.62) we get the amplitude reflection ratio for the horizontal (German: senkrecht) polarization

r_s = E_rs/E_is = (n_1 cos θ_i − n_2 cos θ_t) / (n_1 cos θ_i + n_2 cos θ_t)    (2.66)

The same can be done for the transmission ratio, but since we are only interested in the reflected part, it is not listed here.
The refraction indices in Eq. (2.66) can be eliminated with Snell's law, Eq. (2.63). To get the reflection coefficient we have to calculate the ratio of the reflected and incident intensities, which are given by Eq. (2.31). Because all prefactors of the E-field cancel out for reflection, we just have to square the amplitude ratio. This yields the horizontal reflection coefficient

R_s = I_rs/I_is = E_rs²/E_is² = sin²(θ_i − θ_t) / sin²(θ_i + θ_t)    (2.67)
Now the same can be done if the field vector E is parallel to the plane of incidence. Here we can use the same boundary conditions as for the horizontal polarization, but we have to exchange the E with the H field. This gives the following conditions:

H_ip + H_rp = H_tp   ⇒   n_1 E_ip + n_1 E_rp = n_2 E_tp    (2.68)
E_ip cos θ_i − E_rp cos θ_i = E_tp cos θ_t    (2.69)

This yields the amplitude reflection ratio for the parallel polarization

r_p = E_rp/E_ip = (n_2 cos θ_i − n_1 cos θ_t) / (n_2 cos θ_i + n_1 cos θ_t)    (2.70)
This result looks quite similar to the result for the horizontal polarization, Eq. (2.66), but with the refraction indices swapped. This has a great impact if we again use Snell's law, Eq. (2.63). The parallel reflection coefficient for the intensities is then given as

R_p = I_rp/I_ip = E_rp²/E_ip² = tan²(θ_i − θ_t) / tan²(θ_i + θ_t)    (2.71)
These two reflection coefficients (Eq. (2.67) and Eq. (2.71)), together with the corresponding transmission coefficients for light polarized horizontally and parallel to the plane of incidence, are called the Fresnel coefficients.
Another interesting property can be calculated with the Fresnel coefficients, namely the Brewster angle. The Brewster angle θ_B is defined as the angle at which the parallel polarization of the reflected light vanishes and only horizontal polarization is present. Therefore, at the Brewster angle the degree of linear polarisation is DOLP = 1. At this angle the transmitted beam (angle θ_t) and the reflected beam (angle θ_r) are perpendicular to each other, θ_r + θ_t = 90° (Zinth and Zinth, 2013). From this and the law of reflection, θ_r = θ_B, the following relation for the Brewster angle can be derived:

Brewster angle:   tan θ_B = n_2/n_1    (2.72)
Figure 2.4.: Reflectance for the horizontal (R_s) and parallel (R_p) polarization and degree of linear polarisation (DOLP) plotted against the incidence angle θ_i. The Brewster angle θ_B is indicated by the dashed line. Material properties: n_1 = 1, n_2 = 1.5

Fig. 2.4 shows the reflection coefficients and the degree of linear polarisation (DOLP) for different incidence angles θ_i.
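The curves of Fig. 2.4 can be reproduced with a few lines of Python. The sketch below (written for this text, with n_1 = 1 and n_2 = 1.5 as in the figure) evaluates Snell's law (2.63), the reflection coefficients (2.67) and (2.71), the DOLP of the reflected light for unpolarized illumination, and the Brewster angle (2.72):

```python
import numpy as np

n1, n2 = 1.0, 1.5                                   # material properties of Fig. 2.4
theta_i = np.radians(np.linspace(0.1, 89.9, 500))   # avoid the singular points 0 and 90 deg
theta_t = np.arcsin(n1 / n2 * np.sin(theta_i))      # Snell's law, Eq. (2.63)

# Fresnel reflection coefficients for intensities, Eqs. (2.67) and (2.71)
R_s = np.sin(theta_i - theta_t)**2 / np.sin(theta_i + theta_t)**2
R_p = np.tan(theta_i - theta_t)**2 / np.tan(theta_i + theta_t)**2

# DOLP of the reflected light for unpolarized illumination (the DOLP curve of Fig. 2.4)
dolp = (R_s - R_p) / (R_s + R_p)

theta_B = np.degrees(np.arctan(n2 / n1))            # Brewster angle, Eq. (2.72)
print(f"Brewster angle: {theta_B:.1f} deg")         # ~56.3 deg for n2 = 1.5
```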
The Fresnel reflection and transmission coefficients can also be included in a Müller matrix to describe the interaction of an incoming beam with a surface. The Müller matrices for reflection and transmission are given in Brosseau (1998), Schott (2009) and Kattawar and Adams (1989):

Reflection Müller matrix:     R(θ_i, θ_t) = [ α+η  α−η  0  0 ; α−η  α+η  0  0 ; 0  0  γ_Re  0 ; 0  0  0  γ_Re ]    (2.73)

Transmission Müller matrix:   T(θ_i, θ_t) = [ α′+η′  α′−η′  0  0 ; α′−η′  α′+η′  0  0 ; 0  0  γ′_Re  0 ; 0  0  0  γ′_Re ]    (2.74)

where

α = ½ (tan(θ_i − θ_t) / tan(θ_i + θ_t))²,        η = ½ (sin(θ_i − θ_t) / sin(θ_i + θ_t))²,
γ_Re = tan(θ_i − θ_t) sin(θ_i − θ_t) / (tan(θ_i + θ_t) sin(θ_i + θ_t)),
α′ = ½ (2 cos θ_i sin θ_t / (sin(θ_i + θ_t) cos(θ_i − θ_t)))²,    η′ = ½ (2 cos θ_i sin θ_t / sin(θ_i + θ_t))²,
γ′_Re = 4 cos² θ_i sin² θ_t / (sin²(θ_i + θ_t) cos(θ_i − θ_t))

Here, α, η, γ_Re and α′, η′, γ′_Re represent the Fresnel reflection and transmission coefficients, respectively, for the incidence angle θ_i and the transmission angle θ_t.
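As a sketch of how the reflection matrix of Eq. (2.73) can be assembled in code (the function name and the default n_2 = 1.33 for water are choices made here for illustration, anticipating Sec. 3.2):

```python
import numpy as np

def mueller_reflection(theta_i, n1=1.0, n2=1.33):
    """Reflection Müller matrix R(theta_i, theta_t) of Eq. (2.73).

    theta_i is the incidence angle in radians (> 0); theta_t follows from
    Snell's law, Eq. (2.63). n2 = 1.33 corresponds to water (Sec. 3.2).
    """
    theta_t = np.arcsin(n1 / n2 * np.sin(theta_i))
    alpha = 0.5 * (np.tan(theta_i - theta_t) / np.tan(theta_i + theta_t))**2
    eta   = 0.5 * (np.sin(theta_i - theta_t) / np.sin(theta_i + theta_t))**2
    gamma = (np.tan(theta_i - theta_t) * np.sin(theta_i - theta_t) /
             (np.tan(theta_i + theta_t) * np.sin(theta_i + theta_t)))
    return np.array([[alpha + eta, alpha - eta, 0.0,   0.0  ],
                     [alpha - eta, alpha + eta, 0.0,   0.0  ],
                     [0.0,         0.0,         gamma, 0.0  ],
                     [0.0,         0.0,         0.0,   gamma]])

# Reflected Stokes vector for unpolarized sky light at 37 deg incidence, cf. Eq. (3.4)
S_sky = np.array([1.0, 0.0, 0.0, 0.0])
print(mueller_reflection(np.radians(37.0)) @ S_sky)
```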
3. Method
3.1. Imaging Polarimeter
A polarimeter is a measurement instrument that can measure the polarization state and hence the Stokes vector of the incoming light. An imaging polarimeter is a polarimeter that measures the polarization state of an object for every image position. This can be achieved with a setup of several cameras. Since a CCD or CMOS camera measures only intensities, which correspond to the incoming power (see Sec. 2.1.7), different polarization filters in front of the cameras are necessary to measure all components of the Stokes vector (as in Fig. 2.2). At least four linearly independent measurements are needed to gain the full information of the four-component Stokes vector. If one can assume that no circularly polarized light is present, the last component of the Stokes vector, S_3, can be neglected. Therefore only three cameras with three linear polarization filters are required to obtain the first three components of the Stokes vector (S_0, S_1, S_2).
First we examine an incoming light ray with the Stokes vector S_in = [S_0, S_1, S_2]ᵀ_in, going through a polarization filter adjusted to the angle α. Behind the filter the intensity I_α is measured, which corresponds to the first component of the Stokes vector S_out. As we have seen in Sec. 2.2.3, the effect of a linear polarization filter can be described in terms of Müller calculus by a Müller matrix. The Müller matrix for an ideal linear polarization filter with the polarization angle α is given as:

M_lin.pol.(α) = ½ [ 1  cos 2α  sin 2α  0 ; cos 2α  cos² 2α  cos 2α sin 2α  0 ; sin 2α  cos 2α sin 2α  sin² 2α  0 ; 0  0  0  0 ]    (3.1)
Since the cameras themselves are insensitive to polarization and can only measure the total intensity S_0,out, we only need the first row of the Müller matrix to link the incoming Stokes vector to the measured intensity:

I_α = S_0,out = (b/2) · [ 1  cos 2α  sin 2α ] · (S_0, S_1, S_2)ᵀ_in    (3.2)

where b is a factor that corrects for the polarization filter not being perfect and α is the angle of the filter. The factor b depends on the quality of the polarization filter and on the efficiency of the intensity measurement system and is therefore normally constant over time. As we can see in Eq. (3.2), the relation between the measured intensity and the incoming Stokes vector is linear as long as α does not change. As mentioned before, the minimum requirement to measure the first three components of the incoming Stokes vector is therefore to obtain at least three intensities for three different polarization filter orientations α. As explained in Sec. 4, we used three cameras, each with a different polarization filter angle. To use the full angular range of the polarization of 180°, we set the polarization filters to α = 0°, 60°, 120°. Since the relation in Eq. (3.2) is linear for each of the three intensities, it is possible to invert it to obtain the Stokes vector of
the incoming light:

S_in = (S_0, S_1, S_2)ᵀ_in = A · I = [ a_11  a_12  a_13 ; a_21  a_22  a_23 ; a_31  a_32  a_33 ] · (I_1, I_2, I_3)ᵀ    (3.3)

Here the matrix A is the analyzer matrix and describes how the measured intensities of the polarimeter are related to the Stokes vector. This matrix can be found by calibration of the system, see Sec. 5.6. The analyzer matrix is therefore the heart of the polarimeter, because it enables us to measure the first three components of the Stokes vector.
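For the idealized case of perfect filters (b = 1) and identical cameras, the analyzer matrix A of Eq. (3.3) is simply the inverse of the 3×3 matrix whose rows are given by Eq. (3.2) for α = 0°, 60°, 120°. The following Python sketch illustrates this idealized case; the analyzer matrix actually used in this work is determined by the calibration described in Sec. 5.6.

```python
import numpy as np

angles = np.radians([0.0, 60.0, 120.0])     # filter orientations of the three cameras

# Measurement matrix: row i is 0.5 * [1, cos 2a_i, sin 2a_i], Eq. (3.2) with b = 1
P = 0.5 * np.stack([np.ones_like(angles),
                    np.cos(2 * angles),
                    np.sin(2 * angles)], axis=1)

A = np.linalg.inv(P)                        # ideal analyzer matrix of Eq. (3.3)

# Forward simulation of a partly linearly polarized test beam ...
S_true = np.array([1.0, 0.2, -0.1])
I = P @ S_true                              # intensities the three cameras would measure
# ... and its recovery from the three intensities
print(A @ I)                                # -> [1.0, 0.2, -0.1]
```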
3.2. Water as dielectric matter
As we have seen in Sec. 2.1.5, dielectric materials have some special characteristics that are of importance for optics. Water is such a dielectric material and has an index of refraction of n_water = 1.33 in the bandwidth used here (490 nm to 740 nm) at a temperature of T = 20°C (Daimon and Masumura, 2007). Because water is a dielectric material, the Müller matrices of the Fresnel equations for reflection, Eq. (2.73), and transmission, Eq. (2.74), are valid at the air-water surface. (The refraction index of air is taken as that of vacuum, n_air = 1.) The Müller calculus describes how an incoming Stokes vector S_in is changed by the interaction at the air-sea surface. For the measurement of waves on the water surface, sky light is used for illumination. The reflected Stokes vector S_R is then given as:

S_R = R_AS(θ) · S_sky    (3.4)
with the Stokes vector of the incoming sky light S_sky. Here R_AS(θ) is the Müller matrix of reflection (Eq. (2.73)) at the air-sea interface. Eq. (3.4) shows that the polarization state depends on the incidence angle of the reflected light, which is the basis for the polarimetric slope imaging technique (see Sec. 3.3). To get a functional relationship between the incidence angle θ and the polarization state, the degree of linear polarisation (DOLP) is calculated. If we assume that the sky is unpolarized, i.e. that the normalized Stokes vector is S_sky = [1, 0, 0, 0]ᵀ, the DOLP of the reflected Stokes vector S_R can be calculated with Eq. (2.49) as:

DOLP(θ, n) = (α(θ, n) − η(θ, n)) / (α(θ, n) + η(θ, n))    (3.5)
This equation is the key point of the polarimetric wave measurement technique, because it links the polarization to the incidence angle of the incoming light. Figure 3.1 illustrates the dependence of the degree of linear polarisation on the angle of incidence θ. The Brewster angle for water, θ_B = arctan(n_water) ≈ 53°, is indicated by a vertical dashed line.
Because DOLP(θ) is not a monotonic function (see Fig. 3.1), it is not possible to invert the relation over the whole range of the incidence angle θ. The inversion from DOLP to θ is possible between 0° and the Brewster angle θ_B = 53°, or between the Brewster angle and 90°. We have chosen to look at the water surface under an angle of θ_pol = 37°, because the DOLP rises almost linearly there, as can be seen in Fig. 3.1b. Figure 3.1b shows the absolute derivative |∂DOLP/∂θ| over the incidence angle θ. The inflection point, where the gradient is at its maximum and the curve is almost linear, is close to θ = 37°.
In the range between the Brewster angle θ_B and 90° the inversion from DOLP to θ is very sensitive. Also the reflection coefficients, and therefore the amount of reflected light, would be high. Still it is not practicable to measure in this region, because the viewing angle of the polarimeter would be so low that large waves would hide parts of the measurement area.
Figure 3.1.: a Degree of linear polarisation for the reflection at an air-sea interface, calculated from the Fresnel reflection coefficients, Eq. (3.5). b Absolute derivative |∂DOLP/∂θ| over the incidence angle θ. The Brewster angle of θ_B = 53° is indicated by a black dashed line.
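A minimal numerical sketch of the inversion DOLP → θ that underlies Eq. (3.5) (assuming an unpolarized sky, no upwelling light and n = 1.33; written for this text, not taken from the thesis software) tabulates the DOLP on the monotonic branch below the Brewster angle and inverts it by interpolation:

```python
import numpy as np

n = 1.33                                            # index of refraction of water

def dolp_reflected(theta_i):
    """DOLP of reflected, initially unpolarized light, magnitude of Eq. (3.5)."""
    theta_t = np.arcsin(np.sin(theta_i) / n)        # Snell's law
    alpha = 0.5 * (np.tan(theta_i - theta_t) / np.tan(theta_i + theta_t))**2
    eta   = 0.5 * (np.sin(theta_i - theta_t) / np.sin(theta_i + theta_t))**2
    return np.abs(alpha - eta) / (alpha + eta)

# Look-up table on the monotonic branch 0 < theta < theta_B ~ 53 deg
theta_tab = np.radians(np.linspace(0.5, 52.5, 2000))
dolp_tab = dolp_reflected(theta_tab)

def theta_from_dolp(dolp):
    """Invert DOLP -> incidence angle by linear interpolation in the table."""
    return np.interp(dolp, dolp_tab, theta_tab)

# Round trip at the chosen working angle of 37 deg
print(np.degrees(theta_from_dolp(dolp_reflected(np.radians(37.0)))))   # ~37.0
```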
In the previous derivation of the degree of linear polarisation, the contribution of the upwelling light was neglected. To get the total Stokes vector S_tot that is seen by the polarimeter, the effect of the upwelling light from beneath the water surface has to be taken into account. The upwelling light with the Stokes vector S_up is transmitted through the sea surface. Thus the transmitted Stokes vector S_T is given as

S_T = T_AS(θ′) · S_up    (3.6)

where T_AS(θ′) is the Müller matrix of transmission (Eq. (2.74)) at the air-sea interface. The total Stokes vector is therefore given as

S_tot = S_R + S_T = R_AS · S_sky + T_AS · S_up    (3.7)

If we make the same assumption as before, that the sky and the upwelling light are unpolarized, i.e. that the normalized Stokes vectors are S_sky = S_up = [1, 0, 0, 0]ᵀ, the DOLP of the total Stokes vector S_tot can be calculated with Eq. (2.49) as

DOLP(θ, n) = [α(θ, n) − η(θ, n) + u(θ, θ′) · (α′(θ′, n) − η′(θ′, n))] / [α(θ, n) + η(θ, n) + u(θ, θ′) · (α′(θ′, n) + η′(θ′, n))]    (3.8)

where α, η are the coefficients of the reflection and α′, η′ those of the transmission Fresnel formulas, θ is the incidence and θ′ the transmission angle. u(θ, θ′) = S_0T(θ′)/S_0R(θ) is the ratio between the transmitted (or upwelling) and the reflected light intensity (Zappa et al., 2008). The upwelling light changes the degree of linear polarisation, which is needed to calculate the surface slope. Hence it is important for an accurate measurement of the reflected DOLP that the upwelling light does not play a role, which means that

u(θ, θ′) = S_0T(θ′)/S_0R(θ) → 0    (3.9)

This leads again to the much simpler form of DOLP(θ), Eq. (3.5), from before. Not only the upwelling light disturbs Eq. (3.5), but also the incoming polarization of the sky, which will be discussed in the next section.
3.2.1. Polarization of the Sky
The sky consists of many different particles, like gas molecules and aerosols, with different size distributions. The electromagnetic waves of light interact with these particles and get scattered. The scattering process depends on the size of the particles and is mainly divided into two different regimes. The first is Rayleigh scattering, which applies if the particle size is much smaller than the wavelength (2πr ≪ λ); the other is Mie scattering, which is valid if the particle size is equal to or greater than the wavelength (2πr ≥ λ). Here r is the radius of the particle and λ is the wavelength. Because Mie scattering is mainly directed in the forward direction, it can be included as a first-order correction to the Rayleigh sky model.
The Rayleigh sky model describes the polarization of the incoming sunlight in a clear sky due to Rayleigh scattering. A description of the sky model can be found in Schott (2009), and an application of the Rayleigh sky model together with polarimetric slope sensing can be found in Barsic and Chinn (2012). Lee (1998) showed with polarimetric images of the clear sky that the degree of linear polarisation can even reach DOLP = 1 at an angle of 90° to the sun.
With cloud cover the situation changes significantly, because in clouds the prevailing scattering type is Mie scattering. Pust and Shaw (2006) showed that the sky becomes nearly unpolarized if it is covered by clouds. Horváth et al. (2002) used this phenomenon to detect clouds.
Figure 3.2.: Images of the polarization of the sky taken at 14:50 on the 04.07.2013. Looking direction:
South. a First component of the Stokes vector S 0 , where the clear sky is dark and the clouds are white.
b Overlay of DOLP onto the S 0 image. The depolarizing effect of the clouds can be seen.
Figure 3.2 shows an image taken with the polarimeter on 04.07.2013 at 14:50 (local time) of the southern part of the sky with some clouds. The sun position was Elevation = 51.77°, Azimuth = 238.79° at the geographical position Latitude = 49.41729°N, Longitude = 8.67402°E¹. This means that the sun was above the top right corner of the image. Since the maximal degree of linear polarisation is reached at 90° to the sun, the DOLP when looking almost in the direction of the sun does not exceed DOLP ≤ 0.25. Fig. 3.2a shows the S_0 component of the Stokes vector to illustrate the scene (the sky is mostly dark and the clouds are white). Fig. 3.2b is an overlay of the DOLP onto the S_0 image in color to show the depolarizing effect of clouds.
1 Sun position calculated with http://www.sunearthtools.com/dp/tools/pos_sun.php
3.3. Polarimetric Slope Imaging
Equation (3.5) is the main equation for the polarimetric slope imaging technique. With the inversion of the equation, from DOLP to θ, it is possible to measure the incidence angle of light onto the water surface. From this and the known viewing angle of the polarimeter the slope of the surface can be computed. Since the inversion is not unique, it is only possible to invert the equation in the range θ = 0°-53°. Together with the polarization angle Φ (Eq. (2.34)) it is possible to recover a two-dimensional slope field from the light reflected at the air-sea surface.
If η(x, t) is the water surface elevation, polarimetric slope imaging measures the gradient of the elevation, [s_x, s_y]ᵀ = ∇η(x, t). This relation will later be used to recover the surface elevation except for an integration constant (see Sec. 3.4).
Figure 3.3.: Geometrical representation of the light path for polarimetric measurements. The reflecting surface is defined by the surface normal vector. The angle θ is defined by the direction of the incident light and the surface normal vector and can be measured via the DOLP. The orientation of the surface with respect to the horizontal is defined by the angle φ. (Note: φ = Φ + 90°.) Source: Zappa et al. (2008)
Figure 3.3 shows the geometric relationship between the surface slope and the incidence angle θ (measured with DOLP), and between the surface orientation φ and the polarization angle Φ, where
φ = Φ + 90°. As derived in Sec. 2.3, the incidence angle and the reflection angle are equal (Eq.
(2.62)) and the reflected beam and the surface normal lie in the same plane. The surface orientation φ can then be seen as the angle between the X-axis of the imaging plane and the plane
of reflection. Therefore the orientation of the surface normal is determined by the angles θ, φ
relative to the camera reference system.
Because the camera is looking tilted onto the water surface, a projective transformation from
the image plane onto the water surface is needed to obtain the water surface slope in Cartesian
coordinates, see Sec. 5.2.
Because the system is measuring angles, the angle of view of the camera must be taken into
account. This can be done by requiring that the long-term average of the water surface slope is
zero (flat surface). Thus, a long-term slope average can be subtracted from all images to correct
for the angle of view.
To calculate the slope of the surface, a transformation from θ, φ to s_x, s_y is needed. The
angular slope components of the surface facet are:

X_comp = −cos φ · θ = sin Φ · θ
Y_comp = sin φ · θ = cos Φ · θ    (3.10)
where φ = Φ + 90° was used. To get the actual slope, the tangents of X_comp and Y_comp have to be taken:

s_x = tan(X_comp) = tan(sin Φ · θ)
s_y = tan(Y_comp) = tan(cos Φ · θ)    (3.11)
From these two slope maps an elevation map can be calculated, which will be done in the next
section.
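As a concrete illustration of Eqs. (3.10) and (3.11), the conversion from the measured angles to the slope components can be written in a few lines; the use of NumPy and the function name are my own choices and not part of the processing software described later:

```python
import numpy as np

def angles_to_slopes(theta, phi_pol):
    """Convert the incidence angle theta and the polarization angle Phi
    (both per-pixel arrays in radians) to the slopes s_x, s_y.
    Uses phi = Phi + 90 deg, so that -cos(phi) = sin(Phi) and
    sin(phi) = cos(Phi), as in Eq. (3.10)."""
    x_comp = np.sin(phi_pol) * theta      # angular slope, X component
    y_comp = np.cos(phi_pol) * theta      # angular slope, Y component
    s_x = np.tan(x_comp)                  # Eq. (3.11)
    s_y = np.tan(y_comp)
    return s_x, s_y
```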
3.4. Height Reconstruction
Since the slope in X- and Y-direction (s x , s y ) corresponds to the gradient of the surface elevation
h(x) of the water, it is possible to reconstruct the water height except for an additive constant,
which corresponds to the constant of integration. A first attempt to get the height from a
gradient field would be the direct integration of the two slope components. The result then depends
on the integration path, so for each pixel the height would have to be averaged over many integration paths, which makes this method computationally expensive. A nowadays very
commonly used alternative was proposed by Frankot and Chellappa (1988) and has already been
applied successfully to water surface slopes, see Zhang (1996), Balschbach (2000), Fuß (2004)
and Rocholz (2008). This method uses some useful properties of the Fourier domain. A quantity in the Fourier domain will be indicated by a ^. The definition and the properties of the spatial
Fourier transformation (FT) can be found in Jähne (2005). The starting point is the transformation of the height gradient into Fourier space.
Spatial domain                    Fourier domain

s_x(x) = ∂h(x)/∂x   ◦−•   ŝ_x(k) = i k_x ĥ(k)   ⇒   i k_x ŝ_x(k) = −k_x² ĥ(k)
s_y(x) = ∂h(x)/∂y   ◦−•   ŝ_y(k) = i k_y ĥ(k)   ⇒   i k_y ŝ_y(k) = −k_y² ĥ(k)    (3.12)
Here it was used that a partial derivative ∂_x in the Fourier domain corresponds to a multiplication with i k_x. In a second step, the equations were multiplied with i k_x and i k_y, respectively. Now the Fourier
transform of the height can be written as:
ĥ(k) = −i (k_x ŝ_x(k) + k_y ŝ_y(k)) / (k_x² + k_y²)    (3.13)
The denominator (k_x² + k_y²) = |k|² is the squared norm of k. The equation cannot be evaluated at
|k|² = 0, which means that the mean height and the mean slope cannot be recovered
with this method. To recover the real height, equation (3.13) must be transformed back into real
space with the inverse Fourier transformation FT⁻¹. The formula for the height reconstruction
is therefore:
h(x) = FT⁻¹ [ −i (k_x ŝ_x(k) + k_y ŝ_y(k)) / (k_x² + k_y²) ]    (3.14)
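A minimal sketch of this reconstruction, assuming the two slope maps are sampled on a regular grid and given as NumPy arrays; function and variable names are illustrative and the actual implementation used in this work may differ:

```python
import numpy as np

def reconstruct_height(s_x, s_y, dx=1.0, dy=1.0):
    """Frankot-Chellappa style integration of a gradient field, Eq. (3.14).
    The mean height (k = 0 component) is not recoverable and is set to zero."""
    ny, nx = s_x.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    kxx, kyy = np.meshgrid(kx, ky)

    sx_hat = np.fft.fft2(s_x)
    sy_hat = np.fft.fft2(s_y)

    denom = kxx**2 + kyy**2
    denom[0, 0] = 1.0                      # avoid division by zero at k = 0
    h_hat = -1j * (kxx * sx_hat + kyy * sy_hat) / denom
    h_hat[0, 0] = 0.0                      # mean height is arbitrary

    return np.real(np.fft.ifft2(h_hat))
```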
3.5. Constraints for the Polarimeter technique
As we have seen in the previous sections, there are some constraints that must be met for the
polarimetric slope imaging technique to work properly. They are summarized and characterized
in the following list.
No Upwelling Light As we have seen in Eq. (3.8) for DOLP, the upwelling light from underneath the water surface makes the inversion of the relation of DOLP and the angle of
incidence θ nearly impossible. In clear and deep water (e.g. in the open ocean) this is not
a problem, because the light is absorbed within about 200 m. In turbid water, e.g. coastal
areas with a lot of biological activity or turbid rivers, light scattered from suspended particles close beneath the surface can be a problem. In the laboratory, the absorption of
water can be increased by adding a dye that absorbs in the bandwidth of the polarimeter.
Unpolarized Incoming Light Eq. (3.5) was derived for unpolarized incoming light. Hence,
if the incoming light is polarized, the relation between DOLP and the incidence angle θ
changes. If the incoming polarization is known, e.g. from a Rayleigh sky model or
from polarization measurements of the incoming light, it is still possible to infer the
incidence angle θ from DOLP. If the polarization of the incoming light cannot
be measured, it has to be verified that the incoming light is unpolarized. The incoming
sky light can be regarded as unpolarized if the sky is completely overcast (see Sec. 3.2.1).
Sufficient illumination The reflectivity of water can be calculated from the Fresnel coefficients (Eq. (2.67) and Eq. (2.71)) and ranges from 2.0% at θ = 0° to 3.8% at θ = 53°. This
means that only little light is reflected at the water surface. Thus, enough light has to be
available so that the exposure time can be set short enough to capture the high frequency
waves. Especially for indoor experiments, attention has to be paid to proper illumination.
Pixel size small enough Due to the nonlinearity of the dependence of DOLP on the surface
slope (Eq. (3.5) and Fig. 3.1), the slope should not change significantly over the area
that is projected onto one pixel. This means that even in studies where long waves are of
interest, the scale of the smallest occurring waves determines the requirement for the spatial
resolution. In studies of wind generated waves, where capillary waves are abundant, this
means that if large areas are to be observed, a high image resolution is required.
4. Experiments and Setup
There were two major experiments with the polarimeter. One was conducted on board the
research vessel Meteor for one month and the other one was carried out at the Hamburgische Schiffsbau-Versuchsanstalt (HSVA) in Hamburg.
4.1. Experiments at the Meteor
It was possible to deploy the Stereo Polarimeter on the Meteor M91 cruise¹ off the Peruvian coast. The cruise started on the 1st of December 2012 in Callao (Peru) and ended on the
26th of December 2012 in Callao as well. The cruise was part of the SOPRAN² project, in which
two PhD students of our group, Daniel Kiefhaber and Leila Nagel, were taking part. On board
of the ship the Stereo Polarimeter was operated by Daniel Kiefhaber. Due to the short building
time and the early date of the shipment from Heidelberg to Callao on the 12th of October 2012,
the setup was barely tested. The setup of the Stereo Polarimeter at the METEOR is described
in Sec. 4.3. Table 4.1 lists all measurement stations where the Stereo Polarimeter was running,
with the station names and positions taken from the cruise logbook.
Figure 4.1.: Route of the Meteor (black) in front of Peru with the measurement stations of the Stereo Polarimeter marked by a red cross. Map generated with Matlab® WebMap.
1 METEOR M91 Cruise Report: http://www.ifm.zmaw.de/fileadmin/files/leitstelle/meteor/M90_M93/M91-SCR.pdf
2 SOPRAN - Surface Ocean Processes in the Anthropocene: http://sopran.pangaea.de/
Table 4.1.: Overview of all measurement stations at the Meteor where the Stereo Polarimeter was acquiring
data. The Station name, Date, Time and Position were taken from the logbook of the Meteor.
Num  Station  Date        Time      Position Lat   Position Lon   Recording time
1    1728-1   06.12.2012  21:33:00  8°8.460 S      80°7.200 W     70.0 s
2    1728-2   06.12.2012  22:14:00  8°8.400 S      80°7.190 W     60.0 s
3    1732-2   07.12.2012  19:31:00  9°19.790 S     78°58.190 W    128.0 s
4    1745-2   11.12.2012  19:46:00  12°2.390 S     77°22.210 W    90.0 s
5    1746-1   11.12.2012  21:29:00  12°2.390 S     77°29.410 W    120.0 s
6    1746-2   11.12.2012  21:54:00  12°2.420 S     77°29.420 W    400.0 s
7    1750-1   12.12.2012  14:44:00  12°2.380 S     78°30.020 W    120.0 s
8    1750-2   12.12.2012  16:07:00  12°2.440 S     78°30.010 W    420.0 s
9    1750-3   12.12.2012  17:37:00  12°3.790 S     78°29.650 W    145.0 s
10   1752-5   13.12.2012  14:05:00  12°56.960 S    78°41.430 W    200.0 s
11   1752-6   13.12.2012  18:38:00  12°57.030 S    78°41.430 W    415.4 s
12   1757-1   15.12.2012  10:02:00  12°6.930 S     77°17.500 W    60.0 s
13   1761-3   16.12.2012  18:30:00  13°8.400 S     76°31.800 W    2080.0 s
14   1762-1   16.12.2012  22:44:00  13°25.780 S    76°22.190 W    120.0 s
15   1764-1   17.12.2012  08:34:00  14°7.230 S     76°52.230 W    420.0 s
16   1764-4   17.12.2012  12:11:00  14°8.770 S     76°53.890 W    60.0 s
17   1764-5   17.12.2012  16:02:00  14°8.790 S     76°53.930 W    1020.0 s
18   1764-9   17.12.2012  22:00:00  14°11.100 S    76°55.990 W    60.0 s
19   1764-10  17.12.2012  22:30:00  14°11.110 S    76°56.010 W    540.0 s
20   1766-1   18.12.2012  17:27:00  14°26.990 S    77°28.230 W    120.0 s
21   1766-2   18.12.2012  18:58:00  14°27.070 S    77°28.330 W    480.0 s
22   1769-2   19.12.2012  14:46:00  15°2.930 S     77°47.390 W    960.0 s
23   1770-2   19.12.2012  21:07:00  15°19.680 S    77°32.020 W    120.0 s
24   1770-3   19.12.2012  22:11:00  15°19.700 S    77°32.030 W    1020.0 s
25   1772-3   20.12.2012  10:58:00  15°54.150 S    77°3.560 W     60.0 s
26   1772-4   20.12.2012  11:32:00  15°54.210 S    77°3.620 W     720.0 s
27   1773-1   20.12.2012  16:19:00  16°10.710 S    76°48.250 W    480.0 s
28   1773-3   20.12.2012  20:32:00  16°9.380 S     76°49.280 W    360.0 s
29   1774-1   21.12.2012  19:40:00  16°1.150 S     76°30.140 W    120.0 s
30   1774-2   21.12.2012  21:15:00  16°1.140 S     76°30.730 W    840.0 s
31   1776-1   22.12.2012  09:30:00  15°41.400 S    75°54.020 W    60.0 s
32   1776-2   22.12.2012  10:58:00  15°41.460 S    75°54.010 W    840.0 s
33   1777-3   22.12.2012  18:34:00  15°31.190 S    75°36.030 W    540.0 s
34   1777-5   22.12.2012  20:31:00  15°32.440 S    75°36.840 W    1223.0 s
35   1777-9   23.12.2012  12:34:00  15°35.190 S    75°38.240 W    1320.0 s
36   1778-1   23.12.2012  20:01:00  15°22.760 S    75°19.910 W    60.0 s
37   1778-2   23.12.2012  20:39:00  15°22.830 S    75°20.040 W    600.0 s
4.2. Stereo Polarimeter
The idea of the Stereo Polarimeter was to combine stereo height measurements (Benetazzo,
2006; Schumacher, 1939) with slope measurements of the polarimeter (Zappa et al., 2008), to
gain information about large and small scale waves. My task was the development of the polarimetric slope imaging technique, but for documentation reasons the stereo measurements
will be illustrated here as well.
The Stereo Polarimeter consists of two identical polarimeter boxes, each equipped with three
cameras. For cooling reasons, the cameras are installed on an aluminium block, parallel to
each other with a displacement of 3.5 cm. Some space is left for an optional fourth camera. A
power supply for the cameras and a fan for cooling the inside of the box were installed as well. On each
camera of the type Basler acA2500-14gm (for the specifications see Tab. 4.2) a TAMRON lens³
with 16 mm focal length is installed. On the lens a polarisation filter from Schneider-Kreuznach⁴
is mounted. Because the polarisation filter only works in the bandwidth from 420 nm to
780 nm, a yellow filter⁵ and an IR-blocking filter⁶ (used as front window) are placed in front of the
polarisation filter. The bandwidth is therefore limited to 490 nm to 740 nm. If necessary,
the cameras can be triggered externally with a function generator. A picture of the inside of a
polarimeter box and the setup of the optical components is given in Fig. 4.2.
Table 4.2.: Specifications of the camera used. Source: http://www.baslerweb.com/products/ace.html?model=170&language=en

Vendor                           Basler
Model                            acA2500-14gm
Sensor type                      Progressive Scan CMOS, rolling shutter
Sensor diagonal                  Diagonal 7.13 mm, Optical Size 1/2.5 inch
Lens Mount                       C-Mount
Resolution horizontal/vertical   2592 pixel × 1944 pixel
Pixel Size horizontal/vertical   2.20 µm × 2.20 µm
Pixel Bit Depth                  12 bits
Maximum Frame Rate               14 fps (@ 2592 × 1944 pixel)
Synchronization                  external trigger, free-run, Ethernet connection
Interface type                   Gigabit Ethernet
4.3. Setup Meteor
The Stereo Polarimeter was installed at the bow of the research vessel METEOR7 . The rack of the
3 m long stereo basis was mounted on top of the ACFT 8 box. The height from the polarimeter
boxes to the water surface was 8.9 m. Hence the length of the line of sight was 11.14 m, because
the polarimeter was tilted by 37° to the water surface normal. The line of sight is an imaginary
3 TAMRON M118FM16, http://www.tamron.eu/en/cctv/cctv-single/cctvproduct/m118fm16-wlock-118-16mm-f14-c-mount-3.html
4 Schneider Fil Pol/CIR 25,5-MRC, http://www.schneiderkreuznach.com/fileadmin/user_upload/bu_industrial_solutions/industriefilter/Polarizer/IF_Polarizer.pdf
5 Schneider Fil 022/25,5-MRC, http://www.schneiderkreuznach.com/fileadmin/user_upload/bu_industrial_solutions/industriefilter/Color/IF_Color_Filter.pdf
6 CalflexX, http://www.opticsbalzers.com/data/tmp/1383303693_OBA%20010%20PE.pdf
7 http://www.ifm.zmaw.de/fileadmin/files/leitstelle/meteor/METEORvirtuell/index.html, http://www.ifm.zmaw.de/fileadmin/files/leitstelle/meteor/M90_M93/M91-SCR.pdf
8 Active Controlled Flux Technique, see Schimpf et al. (2011)
Figure 4.2.: Setup of the Polarimeter: a three cameras with different polarisation filters (0°,60°,120°) aligned
parallel next to each other. b optical components of each camera: optics with aperture, polarisation filter,
yellow filter and an IR-blocking filter, which is used as front window of the box.
straight line from the center of the camera to the mean water surface. A sketch of the setup on
the Meteor and a photo are given in Fig. 4.3. Detailed information about the dimensions and the
camera configuration is summarized in Tab. 4.3. A summary of all measurements with the corresponding conditions during the Meteor 91 cruise is given in Tab. 4.1.
Figure 4.3.: a Photo (taken by Daniel Kiefhaber) and b sketch of the setup of the Stereo Polarimeter at the
Meteor. The polarimeter was fixed with an X95 rack to face the water surface under an angle of 37° to the
water surface normal.
For calibrating the stereo system a set of chessboard pictures was taken. This was done by
turning the Stereo Polarimeter such that it was facing towards the ship.
Table 4.3.: Specifications of the Setup at the Meteor with the camera configuration used. (The real image
size is calculated as a plane parallel to the image sensor at the distance of the line of sight.)

Focal length                   16 mm
Aperture                       1.4
Binning                        1x1
Resulting pixel pitch          2.20 µm x 2.20 µm
Resulting resolution           2592 pixel x 1944 pixel
Maximum frame rate             14 fps
Distance Ship - Water surface  8.9 m
Length of the line of sight    11.14 m
Real image size                3.97 m x 2.97 m
Real resolution                1.53 mm/pixel
4.4. Experiments in Hamburg
The experiments in Hamburg were conducted at the Hamburgische Schiffsbau-Versuchsanstalt⁹
on the 12th and 13th of August 2013. At the small towing tank of the HSVA only one polarimeter
box was installed, since only the slope of the waves had to be measured. In this experiment
the waves were not driven by wind but generated by a wave generator. Different wave spectra
were generated. A wave wire was installed at the tank, which served as a reference for comparison. Table 4.4
shows all measurements and conditions of the experiment.
Table 4.4.: Overview of all measurements and conditions at the Hamburgische Schiffsbau-Versuchsanstalt.

Date        Measurement Name  Generated Waves      Acquisition Frequency  Recording Time
12.08.2013  HSVA1             Test measurements    free running           1 × 100 s
12.08.2013  HSVA2             Test measurements    free running           1 × 100 s
12.08.2013  HSVA3             Monochromatic        10 Hz                  2 × 100 s
12.08.2013  HSVA4             Monochromatic        10 Hz                  2 × 100 s
12.08.2013  HSVA5             Monochromatic        10 Hz                  6 × 40 s
12.08.2013  HSVA6             Monochromatic        10 Hz                  10 × 40 s
12.08.2013  HSVA7             Monochromatic        25 Hz                  1 × 120 s
12.08.2013  HSVA8             Monochromatic        25 Hz                  4 × 120 s
13.08.2013  HSVA9             Monochromatic        25 Hz                  1 × 120 s
13.08.2013  HSVA10            Monochromatic        25 Hz                  2 × 120 s
13.08.2013  HSVA11            Monochromatic        25 Hz                  2 × 120 s
13.08.2013  HSVA12            Monochromatic        25 Hz                  2 × 120 s
13.08.2013  HSVA13            Monochromatic        25 Hz                  3 × 120 s
13.08.2013  HSVA14            Monochromatic        25 Hz                  3 × 120 s
13.08.2013  HSVA15            Monochromatic        25 Hz                  2 × 120 s
13.08.2013  HSVA16            Continuous spectrum  25 Hz
13.08.2013  HSVA17            Continuous spectrum  25 Hz
13.08.2013  HSVA18            Continuous spectrum
13.08.2013  HSVA19            Continuous spectrum
9 HSVA: http://www.hsva.de/
4.5. Setup Hamburg
For the measurements at the Hamburgische Schiffsbau-Versuchsanstalt (HSVA) only the polarimeter function of the Stereo Polarimeter was necessary. To fit the needs at the HSVA, the components of one box were changed slightly. To achieve a larger measurement area, a TAMRON lens¹⁰
with 8 mm focal length was installed on each camera. The yellow filter was replaced by a red
filter¹¹ and as a result the bandwidth of the incoming light was limited to 600 nm–740 nm. To
increase the sensitivity of the cameras, they were operated in binning mode, i.e. 4 × 4 pixels
were collected together. Accordingly, the resulting image has a smaller resolution,
which was sufficient at the HSVA because the pixel area on the water surface was still smaller
than the smallest wavelength (see Sec. 3.5). In addition, the cameras could be operated
at a higher acquisition rate (up to 30 fps). Detailed information about the setup is given
in Tab. 4.5. The cameras were triggered externally with a function generator, except for the free
running mode, where the cameras use their internal trigger.
Table 4.5.: Specifications of the Setup at the Hamburgische Schiffsbau-Versuchsanstalt with the camera
configuration used. (The real image size is calculated as a plane parallel to the image sensor at the distance
of the line of sight.)

Focal length                 8 mm
Aperture                     1.4
Binning                      4x4
Resulting pixel pitch        8.80 µm x 8.80 µm
Resulting resolution         648 pixel x 486 pixel
Maximum Frame Rate           30 fps
Length of the line of sight  4.00 m
Real image size              2.85 m x 2.13 m
Real resolution              4.4 mm/pixel
4.5.1. Setup at the HSVA
The polarimeter was operated at the small towing tank of the Hamburgische Schiffsbau-Versuchsanstalt (HSVA). The size of the tank can be seen in Fig. 4.4, where the positions of the
wave generator, the wave wire and the polarimeter are marked. The polarimeter was mounted
on a ladder such that it faces the water surface normal under an angle of 37°. Fig. 4.5a visualizes
the geometry of the setup schematically. The polarimeter box is visible in the upper left corner
at the top of the ladder in Fig. 4.5b.
At the Hamburgische Schiffsbau-Versuchsanstalt, two problems arose compared to the open
field measurements (see Sec. 3.5). The first problem was how to achieve a proper unpolarized
illumination, because there was not enough light for the cameras. The second one is the
reflections from the bottom of the tank: it is important that only reflections from the water
surface are seen by the cameras.
Unpolarized Illumination The light that is reflected by the water surface towards the polarimeter
must be unpolarized before it is reflected.
10 TAMRON M118FM08, http://www.tamron.eu/en/cctv/cctv-single/cctvproduct/m118fm08-wlock-118-8mm-f14-c-mount-2.html
11 Hoya R-60 Red M25.5x0.5 Threaded, http://www.edmundoptics.com/optics/optical-filters/color-dichroic-filters/mounted-color-filters/46-542
Figure 4.4.: Sketch of the Polarimeter position at the Small Towing Tank at the HSVA. Source: http://www.hsva.de/
Figure 4.5.: a Sketch of the polarimeter setup at the small towing tank at the HSVA. On the right hand side
the light source is illuminating the ceiling. The polarimeter sees the reflection of the ceiling at the water
surface. b Photo of the polarimeter setup.
Hence two spotlights, each made out of 140 high-performance LEDs¹² (λ_peak = 630 nm), were built for illumination. The degree of
linear polarisation of the light reflected by the ceiling (see Fig. 4.5) was checked on-site
and was only a few percent.
Only reflections from the water surface All light that is gathered by the cameras must come
from the water surface. Otherwise the assumption that no upwelling light is present (see
Sec. 3.5) is no longer justified. When light hits the water surface nearly perpendicularly, only a fraction of about 2% is reflected; almost all light is transmitted into the
water. Therefore it must be guaranteed that no light is reflected by the bottom of the tank.
At the beginning of the experiment the bottom was visible, because the absorption of
the water was not high enough. To enhance the absorption of the water, the dye Patent
Blue V¹³ was added. As can be seen in Fig. 4.6, the absorption band overlaps with
the emission band of the LEDs of the spotlights. Hence, only a small amount of dye was
necessary to suppress all reflections from the bottom. In total, 40 g were added
to the towing tank with a capacity of about 2.7 × 10⁶ l of water.
12 Cree XP-E red, http://www.cree.com/~/media/Files/Cree/LED%20Components%20and%20Modules/XLamp/Data%20and%20Binning/XLampXPE.pdf
13 Patent blue V sodium salt (E131)
Figure 4.6.: Emission spectrum of the red LEDs used and absorption spectrum of Patent Blue
V. The absorption is close to its maximum at the wavelengths where the LEDs emit. Source of the
data: LEDs http://www.cree.com/~/media/Files/Cree/LED%20Components%20and%20Modules/XLamp/Data%20and%20Binning/XLampXPE.pdf, Patent Blue V http://www.zum.de/Faecher/Ch/BW/smarties.shtml
5. Calibration
The calibration is one of the most important parts of my work, because it is crucial for the
success of the measurements.
5.1. Coordinate Systems
In photogrammetry the correct definition of coordinate systems is important, because it mainly
deals with transformations from one coordinate system (e.g. the camera) to another one (e.g.
the real world). All coordinate systems used are listed and explained below.
5.1.1. Pixel Coordinate Frame
The pixel coordinate frame is the coordinate system of a digital image and consists of columns
and rows (u, v). The origin (u = 0, v = 0) is in the upper-left corner and is defined with the
positive y-axis pointing downwards. A digital image can also be interpreted as a matrix, which
becomes multidimensional if there is more information, like color or image sequences.
5.1.2. Image Coordinate Frame
The image coordinate frame (often also: camera coordinate/reference frame) determines the
camera reference system. First of all, we define it as a flat Cartesian coordinate system
(x′, y′) (see Fig. 5.1). Like the pixel coordinate frame, this coordinate frame is fixed to reference points of the
camera. The difference to the pixel coordinate frame is that its origin is in the
center of the image and the coordinates are continuous. The origin of this coordinate system is
also called principal point (see Sec. 5.3).
This coordinate system can be extended by a z′-axis to become a right-handed three dimensional coordinate system. The origin of this three dimensional coordinate system is the projection center of the camera O′.
Figure 5.1.: Sketch of the image coordinate system. The physical imaging process takes place at the
negative side B1. In photogrammetry it is easier to think of the positive definition B2, since the vector x′ points
to the point P in world coordinates. Source: Luhmann (2010, p. 25, Fig. 2.2)
With the extension of the z′-axis it is possible to define a vector x′:

x′ = [x′, y′, z′]^T = [x′, y′, −f]^T    (5.1)

where f is the focal length of the camera. The z′ component is negative because the image coordinate system is right-handed. This vector x′ points to the point P, which is defined in world
coordinates (see Sec. 5.1.3). x′, y′ and z′ are continuous coordinates, but they are related to the pixel
coordinate frame via the pixel size, pixel position and focal length (Sec. 5.1.1).
Normalized Coordinates
Normalized coordinates can be seen as the projection of a pinhole camera. A pinhole camera
is a simple camera model without aberrations, like the one we have used in the previous section. If P′ is a point defined in the camera reference frame, with the vector P′ = [x′, y′, z′]^T, the
normalized coordinates are defined as:

x_n = [x_n, y_n, 1]^T = [x′/z′, y′/z′, 1]^T    (5.2)
The normalized coordinates are therefore independent of any properties of the camera.
5.1.3. World Coordinate Frame
The world coordinate frame (often also: object coordinate frame) is a Cartesian coordinate system (X , Y , Z ), which is defined by reference points at an object in the real world. The aim of
photogrammetry is to connect the world coordinate frame to the camera frame and therefore
to the pixel coordinates. The definition of the origin, the coordinate orientation and the scaling is often called datum. The transformation from world coordinates to image coordinates is
visualized in Fig. 5.3.
5.2. Coordinate Transformations
Transformations are needed to change between two reference systems. Thus finding the right
transformation is one of the key points of photogrammetry. Similarity, affine, polynomial and
bilinear transformations are well discussed and presented in many books about this topic, e.g.
Luhmann (2010), Forsyth and Ponce (2002), Szeliski (2011), Hartley and Zisserman (2003). The
most important transformation, the 2D projective transformation, is described in the next section.
5.2.1. 2D Projective Transformation
A plane (2D) projective transformation, also called plane homography, transforms one plane (2D)
coordinate system to another one, where each ray of the projection crosses the projection center.
The transformation rule is given as:

X = (a_0 + a_1 x + a_2 y) / (1 + c_1 x + c_2 y)
Y = (b_0 + b_1 x + b_2 y) / (1 + c_1 x + c_2 y)    (5.3)
Figure 5.2.: Plane projective transformation from one plane (x, y) to another one (X, Y) or vice versa
or in matrix notation with normalized vectors:

[ X ]   [ a_1  a_2  a_0 ]   [ x ]
[ Y ] = [ b_1  b_2  b_0 ] · [ y ] = H · x    (5.4)
[ 1 ]   [ c_1  c_2  1   ]   [ 1 ]
Here the coordinates (X, Y) and (x, y) can belong to any plane Euclidean coordinate system. For
example, the projective transformation can be used to transform from one camera system to
another one or from one camera system to the water surface.
As can be seen in Eq. (5.3) and Eq. (5.4), the projective transformation has 8 degrees of
freedom (DOF). The equation can be solved by linearizing Eq. (5.3) through multiplication with the
denominator:

a_0 + a_1 x + a_2 y − X − c_1 x X − c_2 y X = 0
b_0 + b_1 x + b_2 y − Y − c_1 x Y − c_2 y Y = 0    (5.5)

The parameters of the homography can be obtained either with a least-squares fit or with SVD
(singular value decomposition). For a detailed description of the SVD see Trucco and Verri (1998).
A special property of the plane projective transformation is that cross-ratios (double ratios) are preserved,
which is related to the intercept theorem.
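To make the estimation concrete, the following sketch sets up the linearized system of Eq. (5.5) for N point correspondences and solves it with a least-squares fit; this is a generic illustration (NumPy, hypothetical function name), not the routine used in this work:

```python
import numpy as np

def fit_homography(xy, XY):
    """Estimate a0..a2, b0..b2, c1, c2 of Eq. (5.3) from N >= 4 point
    correspondences (x, y) -> (X, Y) via the linearized Eq. (5.5)."""
    x, y = xy[:, 0], xy[:, 1]
    X, Y = XY[:, 0], XY[:, 1]
    n = len(x)
    A = np.zeros((2 * n, 8))
    b = np.zeros(2 * n)
    # a0 + a1 x + a2 y - c1 x X - c2 y X = X
    A[0::2, 0:3] = np.column_stack([np.ones(n), x, y])
    A[0::2, 6] = -x * X
    A[0::2, 7] = -y * X
    b[0::2] = X
    # b0 + b1 x + b2 y - c1 x Y - c2 y Y = Y
    A[1::2, 3:6] = np.column_stack([np.ones(n), x, y])
    A[1::2, 6] = -x * Y
    A[1::2, 7] = -y * Y
    b[1::2] = Y
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    a0, a1, a2, b0, b1, b2, c1, c2 = p
    # homography matrix H as defined in Eq. (5.4)
    return np.array([[a1, a2, a0], [b1, b2, b0], [c1, c2, 1.0]])
```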
5.2.2. 3D Transformations
A 3D transformation from one 3D coordinate system to another one is in general given by a
displacement vector X_0, a scaling factor m and a rotation matrix R(ω, ϕ, κ). (More information
about rotation matrices can be found in Sec. A.1.)
X = X_0 + m · R · x′

[ X ]   [ X_0 ]       [ r_11  r_12  r_13 ]   [ x′ ]
[ Y ] = [ Y_0 ] + m · [ r_21  r_22  r_23 ] · [ y′ ]    (5.6)
[ Z ]   [ Z_0 ]       [ r_31  r_32  r_33 ]   [ z′ ]
Figure 5.3 shows a sketch of a transformation from world coordinates to image coordinates.
Equation (5.6) can be seen as the starting point to derive the main equation of photogrammetry,
the so-called collinearity equation. As we have seen in Sec. 5.1.2, the camera coordinate system
Figure 5.3.: Sketch of the coordinate transformation from world coordinates to the image coordinates.
Source: Luhmann (2010, p. 237, Fig. 4.4)
is primarily two dimensional (x′, y′). The imaging process therefore describes the transformation from a world coordinate point (X, Y, Z) to the image coordinates (x′, y′). This can be done
by rewriting Eq. (5.6):

x′ = (a′_0 + a′_1 X + a′_2 Y + a′_3 Z) / (1 + c′_1 X + c′_2 Y + c′_3 Z)
y′ = (b′_0 + b′_1 X + b′_2 Y + b′_3 Z) / (1 + c′_1 X + c′_2 Y + c′_3 Z)    (5.7)

This equation is the three dimensional projection equation or collinearity equation. The
parameters a′_i, b′_i, c′_i can be calculated from the translation vector X_0 and the rotation matrix
R. Since we transform from a 3D coordinate system to a 2D coordinate system, the distance
information in z-direction is lost completely.
5.3. Camera Matrix
With the camera matrix it is possible to transform normalized coordinates (see Sec. 5.1.2) to the
pixel coordinate frame (see Sec. 5.1.1). The camera matrix contains the intrinsic parameters of
the camera without the distortion. There are five independent intrinsic camera parameters:
the focal length (f_x, f_y), the principal point (cc_x, cc_y) and the shear parameter α_c.
The focal length is split into two parameters because the ratio f_y / f_x = (1 + m) gives the scaling
difference if the pixels are not square (for square pixels m = 0). The shear parameter α_c is
important if the pixels are not rectangular. The principal point (cc_x, cc_y) gives the penetration
point of the optical axis at the sensor in pixel coordinates. The camera matrix K is then:

    [ 1   α_c    cc_x ]   [ f_x  0    0 ]   [ f_x  α_c·f_x  cc_x ]
K = [ 0   (1+m)  cc_y ] · [ 0    f_x  0 ] = [ 0    f_y      cc_y ]    (5.8)
    [ 0   0      1    ]   [ 0    0    1 ]   [ 0    0        1    ]
The combination of the camera matrix and the 3D transformation equation (5.6) then gives
the total projection matrix P, which describes the transformation from the world reference
frame to the pixel coordinate frame. The projection matrix P has 11 DOF in total, 5 DOF from
the intrinsic parameters (K-matrix) and 6 DOF from the extrinsic parameters (3 rotation angles
and 3 components of the translation vector X_0).
P = K · R · [I | −X_0] =
    [ f_x  α_c·f_x  cc_x ]   [ r_11  r_12  r_13 ]   [ 1  0  0  −X_0 ]
    [ 0    f_y      cc_y ] · [ r_21  r_22  r_23 ] · [ 0  1  0  −Y_0 ]    (5.9)
    [ 0    0        1    ]   [ r_31  r_32  r_33 ]   [ 0  0  1  −Z_0 ]
Here a new notation is introduced (in geometry often called homogeneous coordinates) to simplify the subtraction of the translation vector X_0. The projection matrix P is a 3×4 matrix and therefore the 3 dimensional vector X must be changed into a 4 dimensional vector by adding 1 as fourth component. The transformation from object coordinates X
to normalized camera coordinates x′ is then given as:

x′ = P · [X, 1]^T = K · R · (X − X_0)    (5.10)
5.4. Imaging Optics and Optical Aberration
There are many different optical aberrations, but most of them are corrected by a good lens.
Detailed information about the different aberrations can be found in Jähne et al. (1999), Jähne
(2005), Luhmann (2010). In our setup spherical aberrations, astigmatism and coma aberrations were corrected by the lens very well and therefore they where not taken into account for
the calibration procedure. The chromatic aberrations also do not play a major role because
the visible spectrum was limited to a small bandwidth with optical filters. Hence the most important aberration for calibration is the distortion of the lens. Additionally, there exist some
radiometric properties as well that cannot be corrected by a good lens, like the field darkening
or the dark noise. Therefore a correction these parameters is important for high quality images.
5.4.1. Field darkening
The intensity of an incoming bundle of light rays is reduced by a factor of cos⁴θ, where θ is the angle
between the incoming light ray and the optical axis of the lens:

I_Sensor = I_Incoming · cos⁴θ    (5.11)

There is also an effect of the aperture, which is discussed in Jähne et al. (1999) and Jähne (2005).
The factor of cos⁴θ is composed of a factor of cos²θ from the inverse square law (the reduction
of the cross-section of the incoming ray bundle), a factor of cos θ from passing the
lenses at an oblique angle and a factor of cos θ from the light rays not hitting the sensor perpendicularly.
This effect can easily be corrected by taking a mean picture of the integrating sphere, which
produces a homogeneous light field, and normalizing it to one at the highest intensity in
the picture. A calibration picture and the setup in front of the integrating sphere can be seen in
Fig. 5.4.
Figure 5.4.: a Setup for the radiometric calibration of the cameras. The cameras are placed parallel in front
of the integrating sphere. b The result of a mean image with an 8 mm optics.
5.4.2. Dark Noise
Due to thermal effects, the sensor produces a signal even when the camera is put into complete
darkness. This noise is called dark noise or amplifier noise because it is provoked by the thermal
stimulation of the readout electronics of the sensor. Each pixel of a CMOS sensor has its own
amplifier, and all these amplifiers have a slightly different offset. The CMOS sensor therefore has
a fixed pattern noise, which comes from these offset differences. The fixed pattern noise is
especially visible in the dark parts of an image, since the amplifiers also become thermally stimulated.
This fixed pattern noise can easily be corrected by subtracting a dark image after the acquisition
of an image. The dark image is generated by taking the mean of a long time series of images
which are acquired while the camera is completely covered, so that no light can enter the lens.
A typical dark image can be seen in Fig. 5.5.

Figure 5.5.: Example of the dark image of a Basler acA2500-14gm

A normalized image with correction of the field darkening and of the dark noise is computed like this:
I_norm = (I_in − I_D) / (I_M − I_D)    (5.12)
where I_in is the input image, I_D is the dark image, I_M is an image of the field darkening and I_norm
is the normalized image.
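In code, Eq. (5.12) is an element-wise operation on the images; the following sketch assumes the dark image and the field darkening image are already available as averaged floating point arrays:

```python
import numpy as np

def radiometric_correction(I_in, I_dark, I_flat):
    """Two-point correction, Eq. (5.12): subtract the dark image and
    normalize with the field darkening (flat field) image."""
    I_in = I_in.astype(np.float64)
    denom = I_flat.astype(np.float64) - I_dark
    denom[denom <= 0] = np.nan          # guard against dead pixels
    return (I_in - I_dark) / denom
```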
5.4.3. Depth-of-field
To get a sharp image of an object it is not only necessary to adjust the focus to the correct
distance, but also to adjust the aperture so that the whole measurement range lies within the depth-of-field. In our case the aperture was fixed to n_f = 1.4 and the focus was set to 11 m at the
Meteor and to 5 m in Hamburg. The calculation of the measurement range (depth-of-field),
in which the image is not blurred, is covered in many textbooks, e.g. Haferkorn (1994). The depth-of-field ∆d = d_f − d_n is defined as the difference between the near point d_n and the far point d_f,
between which the image is still focused. The relationship between these points is depicted in Fig. 5.6.
Figure 5.6.: Sketch of a lens with an aperture (opening diameter d_lens), where all important distances for the
depth-of-field ∆d calculation are depicted.
Near and far point are computed like this:

d_n = 1 / (1/g + 1/d_h)
d_f = 1 / (1/g − 1/d_h)    (5.13)
where g is the distance to the focused object and d h is the hyperfocal distance. These equations
are valid only with the assumption of g >> f , which is given for all of our setups. The hyperfocal
distance is defined as:
d_h = f² / (n_f · ε) + f    (5.14)
where f is the focal length, n_f = f / d_lens is the f-number with the aperture opening diameter
d_lens, and ε is the diameter of the blur disk (in general the pixel size). In our case the hyperfocal
distance for f = 16 mm, n_f = 1.4 and ε = 2.2 µm (size of one pixel) is d_h = 83.13 m, and for f =
8 mm, n_f = 1.4 and ε = 8.8 µm (size of one pixel with 4x4 binning) it is d_h = 5.20 m. Table 5.1
gives an overview of the different distances and the depth-of-field for the different experimental
setups.
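The values in Tab. 5.1 can be checked with a few lines implementing Eqs. (5.13) and (5.14); all lengths are given in metres:

```python
def depth_of_field(f, n_f, eps, g):
    """Hyperfocal distance, near/far point and depth of field,
    Eqs. (5.13) and (5.14)."""
    d_h = f**2 / (n_f * eps) + f
    d_n = 1.0 / (1.0 / g + 1.0 / d_h)
    d_f = 1.0 / (1.0 / g - 1.0 / d_h)
    return d_h, d_n, d_f, d_f - d_n

print(depth_of_field(f=16e-3, n_f=1.4, eps=2.2e-6, g=11.14))  # Meteor setup
print(depth_of_field(f=8e-3, n_f=1.4, eps=8.8e-6, g=5.00))    # Hamburg setup
```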
Table 5.1.: Calculated depth-of-field ∆d and important distances for the two different setups

                       Setup Meteor   Setup Hamburg
Focus f                16.0 mm        8.0 mm
F-Stop n_f             1.4            1.4
Pixel size ε           2.2 µm         8.8 µm
Object distance g      11.14 m        5.00 m
Hyperfocal length d_h  83.13 m        5.20 m
Near Point d_n         9.82 m         2.55 m
Far Point d_f          12.86 m        128.27 m
Depth of Field ∆d      3.04 m         125.72 m
5.4.4. Distortion
For most lenses, distortion is the most significant optical aberration. For the functionality of the
polarimeter three images have to be mapped. In order to avoid errors the distortion correction
is crucial for a correct mapping. Hence a major part of this thesis was to quantify the distortion
of the cameras.
The primary part of distortion is the radial symmetric distortion, which can be classified
into two different regimes, depending on the sign of the distortion parameter: barrel distortion
(negative sign) and pincushion distortion (positive sign).
Figure 5.7.: Illustration of a barrel distortion and b pincushion distortion
There is a lot of literature about this topic, e.g. Zhang (2000), Heikkilä and Silven (1997) and Tsai
(1987), each with its own calibration parameters and procedure. For the optimization of
the parameters we used the Camera Calibration Toolbox for Matlab (Bouguet, 2008) and for
consistency we use the same notation.
Radial-symmetric distortion
The radial symmetric distortion has the most significant effect on the images. The origin of the
radial distortion is normally the principal point. Therefore it is important to use the image coordinates (see Sec. 5.1.2) or the normalized coordinates (see Sec. 5.1.2) for further calculations.
Normalized coordinates can also be obtained from pixel coordinates by multiplying them
with the inverse of the camera matrix. We will use the definition of the normalized coordinates
x_n from Eq. (5.2):
x_n = [x_n, y_n]^T = [x′/z′, y′/z′]^T    (5.15)
With the normalized coordinates we can define a radius from the origin as:
r² = x_n² + y_n²    (5.16)
For our purpose it was sufficient to take just two radial distortion coefficients (k_1, k_2). With
these definitions we get for the radial distortion vector x_rad:

x_rad = (1 + k_1 r² + k_2 r⁴) · x_n    (5.17)
The effect of the radial distortion and the distortion curve can be seen in Fig. 5.8. Fig. 5.8a
visualizes the distortion factor of Eq. (5.17) in front of x_n as a function of the normalized radius r.
The barrel distortion of this lens is clearly visible, since the first coefficient k_1 is negative and
the radial distortion factor drops below one. Fig. 5.8b shows the radial distortion map. The
cross indicates the middle of the image and the circle indicates the principal point. The arrows
point from the ideal position to the distorted position of the image points. The contours show
the shift in pixels.
Figure 5.8.: a Radial symmetric distortion of an 8.00 mm optics as a function of the normalized radius r.
b Influence of the radial symmetric distortion on an image
Radial-asymmetric or tangential Distortion
The tangential distortion comes mainly from a misalignment of the lenses in the optics. Thus,
for well-adjusted optics the tangential distortion parameters (k_3, k_4) are of secondary importance. For our purpose the tangential distortion was taken into account, although it was rather small. The tangential distortion vector x_tan is given as:

x_tan = [ 2 k_3 · x_n · y_n + k_4 · (r² + 2 x_n²) ]
        [ k_3 · (r² + 2 y_n²) + 2 k_4 · x_n · y_n ]    (5.18)
Total distortion model
The total distortion is then described by the sum of the radial distortion x rad and the tangential
one x tan . Another term could be added, which corrects for the affinity and shearing, but this is
already included in the camera matrix K , see Eq. (5.8). The total distortion is therefore:
x_tot = x_rad + x_tan

[ x_px ]       [ x_tot ]       [ x_rad + x_tan ]
[  1   ] = K · [   1   ] = K · [       1       ]    (5.19)
where x_px is the total distortion vector x_tot converted to pixel coordinates of the image with
the camera matrix K. The effect of the total distortion model and of the tangential distortion
can be seen in Fig. 5.9. As in the previous distortion map, the cross indicates the middle
of the image and the circle indicates the principal point. The arrows point from the ideal position
to the distorted position of the image points. The contours show the shift in pixels. Hence a
correction with the total distortion model would displace the image points from the tip to the
shaft of the arrow.
Figure 5.9.: Influence of a the tangential distortion and b the complete distortion, with radial and tangential
distortion, on an image
To apply the total distortion model to images, the shift of each pixel is calculated. Because
the shifted positions do not lie on the same regular grid as the pixel coordinates, some interpolation
is necessary. The standard way is to use a linear interpolation with four pixels as basis.
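A sketch of the distortion model of Eqs. (5.16)-(5.19) applied to normalized coordinates is given below; the subsequent remapping with interpolation is not shown, and in this work the correction was actually performed with the routines of the Camera Calibration Toolbox:

```python
import numpy as np

def distort(x_n, y_n, k1, k2, k3, k4):
    """Apply the radial (k1, k2) and tangential (k3, k4) distortion model,
    Eqs. (5.16)-(5.18), to normalized coordinates."""
    r2 = x_n**2 + y_n**2                               # Eq. (5.16)
    radial = 1.0 + k1 * r2 + k2 * r2**2                # Eq. (5.17)
    x_d = radial * x_n + 2*k3*x_n*y_n + k4*(r2 + 2*x_n**2)   # Eq. (5.18)
    y_d = radial * y_n + k3*(r2 + 2*y_n**2) + 2*k4*x_n*y_n
    return x_d, y_d

def to_pixels(x_d, y_d, K):
    """Convert distorted normalized coordinates to pixel coordinates
    with the camera matrix K, Eq. (5.19)."""
    pts = K @ np.vstack([x_d.ravel(), y_d.ravel(), np.ones(x_d.size)])
    return pts[0].reshape(x_d.shape), pts[1].reshape(x_d.shape)
```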
With the inclusion of the total distortion model in the transformation from world coordinates
to pixel coordinates, the system becomes non-linear. Thus, it is no longer possible to use
a linear technique (like the Direct Linear Transformation, DLT) to obtain the parameters. The
minimization must be done iteratively, for example with a Levenberg–Marquardt algorithm. For our purpose we used the iterative optimization of the Camera Calibration Toolbox
for Matlab (Bouguet, 2008) to obtain all 9 intrinsic parameters (5 from the camera matrix K and
4 from the distortion model). A table with the parameters for all cameras can be found in Tab.
A.1 and Tab. A.2.
5.5. Calibration in Hanau
The normal calibration procedure with the Camera Calibration Toolbox for Matlab (Bouguet,
2008) requires taking about 10 to 15 pictures of a chessboard target in different orientations.
To obtain a good calibration result, the chessboard should cover almost the full image size
within the range of the depth-of-field. As shown in Tab. 4.3, the cameras were focused at a distance of
11.14 m and the image size in the real world was X = 3.97 m and Y = 2.97 m. Since we printed
the chessboard on a plotter, the maximum size was DIN A0 and hence it was not possible to
cover the whole field of view with images of a chessboard at the focused distance. Prof. Jähne came
up with the idea to build our own target with the correct dimensions, which was done at the
Studiozentrum of AEON¹ in Hanau.
5.5.1. Target
The target is made out of three aluminum composite sandwich panels with a size of
3.1 m × 1.5 m. Black circles (made out of Metal Velvet²) were glued onto the panels in a regular pattern (∆x =
∆y = 50.1 cm). At the center of the middle panel a finer pattern (∆x = ∆y = 16.7 cm)
with smaller circles is attached. In each circle a white paper with a known
emission characteristic is applied, so that the target can also be used for radiometric calibration. The three panels were standing upright on the floor and were held by an aluminum rack.
Fig. A.1 shows a 2D-graph of the target and Fig. 5.10 is a photo of it.
Figure 5.10.: Photo of the target in Hanau for the geometric calibration. (Large Pattern: ∆x = ∆y = 50.1 cm,
Small Pattern: ∆x = ∆y = 16.7 cm)
5.5.2. Linear Translation Axis
One of the most difficult parts of camera calibration is to find the principal point of the camera.
Prof. Jähne therefore had the idea that, when a camera moves towards a target on a straight line,
the projections of the target points must move on straight lines away from the principal point. Therefore the cameras were installed on a 3 m long linear translation axis to move them on a straight
line towards the target. The linear translation axis system consists of a Parker Compax3³ control box, a brushless servo motor and a 3 m long linear axis⁴. A photo of the linear axis with the
cameras and a sketch of the whole setup in Hanau is depicted in Fig. 5.11.
1 AEON Verlag & Studio GmbH & Co. KG, http://www.aeon.de
2 Ultra-Diffusive Light-Absorbing Foil UV, VIS and IR, http://www.acktar.com/category/products/lights-absorbing-foils/ultra-diffusive
3 Parker C3S025V2F11I12T11M00, http://www.parker.com
4 Motor: Parker SMH8260038142ID65A7, Linear-axis: Parker LCB060SG03000SRN, Gearbox: PTN080-004S7
Figure 5.11.: a Photo and b sketch of the measurement setup in Hanau with the camera box on the linear
axis facing at the target
Wobble Correction
Due to the large distance from camera to target, even tiny changes in the camera’s viewing
angle had notable effects on the position of the target in the image. Since the camera was
supposed to move in a straight line towards the target, this wobbling had to be corrected. The
laser pointer shown in Fig. 5.11 provided a stable reference point that had a fixed position in
the camera image, but was subject to the same wobbling. Thus, by tracking the position of
the laser pointer on the target, it is possible to determine the changes in the viewing angle. In
practice, it was not trivial to determine the position of the laser spot, since the cheap laser diode that
was used was badly focused and had an irregular spot shape. Therefore, virtual points were used. A
typical calibration image sequence consists of 59 images from different positions of the linear
translation stage. The parameters for the wobble correction were then determined as follows:
1. An area of interest (AOI) was selected such that the same 4 circles are visible in all images of the sequence.
2. The centers of the circles were detected.
3. An artificial image with the same size as the AOI was generated with a circle in the middle (fixed point in the camera).
4. Using a perspective transformation, the image can be warped in a way that the 4 circle centers are the corners of the warped image.
5. The same transformation is applied to the artificial image (circle → ellipse).
6. The transformed artificial image is scaled to the same dimensions as the square pattern of the target (∆x = ∆y = 501 mm).
7. The center of the ellipse is determined, which represents the fixed point at the target.

Figure 5.12.: Wobble Correction
After this was done for every position of the linear axis, a line was fitted to the X- and Y-data
as a function of the position. Due to the intercept theorem (projective transformation, Sec.
5.2.1), the projection of a point must move on a line when the camera moves on a straight
line towards this point. The correction factors (X_corr, Y_corr) were calculated by subtracting the
X-, Y-values of the straight line from the X-, Y-data. The wobbling and the line fit are shown
in Fig. 5.13.
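The correction itself reduces to a straight-line fit over the axis positions and a subtraction; a short sketch (variable names are illustrative):

```python
import numpy as np

def wobble_correction(axis_pos, x_data, y_data):
    """Fit straight lines to the X- and Y-positions of the fixed point as a
    function of the linear axis position and return the deviations
    (X_corr, Y_corr) of the data from these lines."""
    px = np.polyfit(axis_pos, x_data, 1)
    py = np.polyfit(axis_pos, y_data, 1)
    x_corr = x_data - np.polyval(px, axis_pos)
    y_corr = y_data - np.polyval(py, axis_pos)
    return x_corr, y_corr
```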
Figure 5.13.: a Wobbling of the linear axis in X- and Y-direction for every step of the linear axis. For the
correction a linear fit is made. b Correction of the wobbling by subtracting the linear fit.
5.5.3. Detection of the Target
The implementation of the target detection was done in Heurisko® and in Matlab®. As shown in Fig. 5.11, the cameras always face the target head-on. This has one big advantage and one big disadvantage. The advantage is that circles always remain circles and are not
transformed into ellipses. Thus, it was possible to use the Matlab® function imfindcircles.
Since the wobbling of the linear axis was very prominent in the images, it was no longer possible
to use the first idea based on the principal point. Therefore, we tried to use the Camera
Calibration Toolbox for Matlab on this data. Because the target was always aligned parallel to
the image sensor, it was not possible to use it as a planar target: Zhang (1998) showed that the
standard calibration procedure with planar targets no longer works if the planar target
always faces the camera with the same orientation.
The solution to this problem was to generate a 3D point cloud out of the real world coordinates of the planar target and the known translation of the linear axis. For every position of
the linear axis, the real world coordinates of the target were also corrected by shifting the
X-, Y-coordinates with the wobbling parameters (X_corr, Y_corr). With this approach it was possible to calibrate all cameras with the 16 mm lens configuration. The intrinsic parameters of
the calibration can be seen in Tab. A.2.
5.6. Polarization Filter Calibration
As mentioned in Sec. 3.1, the polarization filters of one polarimeter have to be aligned at θ = 0°,
60° and 120° to cover the full range of 180°. The calibration of the orientation of the polarization filter was done with the integrating sphere with a polarization filter attached. Since the
polarization filter of the integrating sphere had no degree scale, it was necessary to calibrate
the orientation of this polarization filter as well. The knowledge of Sec. 3.2, that reflected light
is fully horizontally polarized at the Brewster angle, was used to do the gauging of the polarization filter. The reflection at the Brewster angle of the integrating sphere was acquired with a
camera. A degree scale, determined by the radius of the polarization filter, was mounted on the
polarization filter and the zero point of the scale was set where the reflection was the brightest.
The zero point of the scale at the polarization filter of the integrating sphere is therefore set to
horizontal polarization.
For the rough adjustment of the polarization filter of the three cameras to 0°, 60° and 120° the
corresponding angle was set at the integrating sphere filter. Afterwards the filter in front of the
camera was turned to the darkest position. This means e.g. the 0°-filter is vertically polarizing,
since the zero position of the integrating sphere is horizontally aligned.
For the accurate measurement of the polarization filter adjustment, the filter at the integrating sphere was turned from 0° to 195° in steps of 5° and pictures were taken at every position of the
polarization filter. The analyzer matrix A (see Sec. 3.1 and Eq. 3.3) of each box was determined
using a least squares fit of Eq. 5.20, where I_k is the intensity from camera k and S_i (i = 0, 1, 2)
are the calculated components of the Stokes vector (see Sec. 2.2.2).

[I_1, I_2, I_3]^T = A⁻¹ · [S_0, S_1, S_2]^T    (5.20)
The Stokes vector S is computed from the angle of the polarization filter θ_pol at the integrating
sphere like this:

[S_0, S_1, S_2]^T = [1, cos(2θ_pol), sin(2θ_pol)]^T    (5.21)
An example of the measured intensities, together with the intensities calculated from the calibrated A-matrix and the Stokes vector, can be seen in Fig. 5.14. The crosses are the measured values and the
continuous lines are the values calculated with Eq. (5.20). The polarization angle Φ is defined
by the polarization filter of the integrating sphere.
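The least-squares determination can be sketched as follows: for every filter angle of the integrating sphere the Stokes vector of Eq. (5.21) is known, so A⁻¹ follows from a linear fit of Eq. (5.20). NumPy and the function name are assumptions, not the code used in this work:

```python
import numpy as np

def calibrate_analyzer_matrix(theta_pol, intensities):
    """theta_pol: N filter angles (radians) at the integrating sphere.
    intensities: N x 3 array of measured intensities of the three cameras.
    Returns the analyzer matrix A with S = A . I (Eq. 3.3)."""
    # known Stokes vectors for fully linearly polarized light, Eq. (5.21)
    S = np.column_stack([np.ones_like(theta_pol),
                         np.cos(2 * theta_pol),
                         np.sin(2 * theta_pol)])        # N x 3
    # Eq. (5.20) row-wise: intensities = S @ A_inv.T, solved by least squares
    A_inv_T, *_ = np.linalg.lstsq(S, intensities, rcond=None)
    A_inv = A_inv_T.T
    return np.linalg.inv(A_inv)
```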
Figure 5.14.: Intensities of the different cameras with different polarization filters facing the polarization
filter of the integrating sphere. The crosses indicate the measured values and the continuous lines are the
calculated values with Eq. (5.20).
5.6.1. Test of the Polarization Filter Calibration
To verify the polarization filter calibration of the polarimeter, a special test was designed. A
sketch and a photo of the experimental setup is shown in Fig. 5.15.
The reflection of an integrating sphere (without polarization filter) at the water surface was
recorded for different incidence angles. The polarimeter was put on a rack with an angular
adjustment stand. For a precise measurement of the angle a spirit level was fixed on the top
of the box. The cameras were directly facing a water tank, where the ground was covered by
black fabric, so no upwelling light (see Sec. 3.5) could come from the bottom of the tank. The
integrating sphere was placed such that the reflection could be seen by the cameras.
Figure 5.15.: a Photo and b sketch of the polarimeter test. The integrating sphere is shining onto the water
surface without any filter attached to it. The camera is installed under a certain angle to the water surface
and is capturing the reflection of the integrating sphere.
The incidence angle was adjusted from 30° to 75° in steps of 5° and the degree of linear polarisation was computed for the reflection of the integrating sphere. For matching the images of
the cameras, a chessboard was put on the water surface for every condition. Since the Brewster
angle of 53° is passed when the incidence angle is varied from 30° to 75°, the two parts (left and
right of the Brewster angle) of the DOLP curve in Fig. 3.1 have to be taken into account: below the
Brewster angle the left part applies and above it the right part.
Since the reflection was not always at the same position in the image, the angle of view α of
the camera has to be considered. The angle of view depends on the optics and on the pixel
position and can be calculated as:
α_ypos = arctan( (y_pos − cc_y) / f )    (5.22)
Here α_ypos is the angle of view in Y at a certain position, y_pos is the y-position of the reflection,
cc_y is the principal point (middle of the image) in y-direction and f is the focal length. The
same can be done in x-direction. The angle of view was computed for every position by using
the center of gravity of the reflection as pixel position.
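As a sketch, Eq. (5.22) in code; it is assumed here that the focal length is expressed in pixel units (as in the camera matrix), so that it is directly comparable with the pixel coordinates:

```python
import numpy as np

def angle_of_view(x_pos, y_pos, ccx, ccy, fx, fy):
    """Viewing angles (radians) of a pixel position relative to the optical
    axis, Eq. (5.22), with the focal length given in pixels."""
    alpha_x = np.arctan((x_pos - ccx) / fx)
    alpha_y = np.arctan((y_pos - ccy) / fy)
    return alpha_x, alpha_y
```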
Figure 5.16 shows the result of the polarization filter test. Fig. 5.16a depicts the relation
between DOLP and incidence angle with and without the angle of view correction. Fig. 5.16b plots the quality
of the polarization filter test, where the measured angle is compared to the adjusted angle.
This test also demonstrates that it is possible to use both parts of the DOLP curve in Fig. 3.1
(above and below the Brewster angle) for the reconstruction of the incidence angle. In practice it is not really useful to use both parts, since it is generally not possible to distinguish whether the
Brewster angle has been passed, unlike in this experiment.
Figure 5.16.: a Plot of the degree of linear polarisation (DOLP) for different adjusted angles of the camera. The
blue line shows the result of the measurements when the angle of view and the position of the reflection are
not taken into consideration, whereas in the green one this effect is corrected. The red curve is the theoretically
calculated curve. b Relation between the measured angles and the adjusted angles.
6. Data Processing
The principle of the polarimeter data processing chain, from the acquisition of the pictures of the 3 cameras
to the X- and Y-slope images, is shown in Fig. 6.1. To illustrate the data processing steps, an example set of images is shown in some sections. The conversion from the slope distributions
to the height images is described in Sec. 6.6.
Figure 6.1.: Block diagram of the data processing chain: acquisition (cameras 1–3) → radiometric correction → geometric correction → mapping → analyzer matrix → DOLP, Φ → X-slope, Y-slope.
6.1. Data Acquisition
The three cameras of the polarimeter acquired their images at the same time. The acquisition frequency and the resolution of the different measurements can be seen in Tab. 4.3 and Tab. 4.5.
An example set of 3 images from the measurements in Hamburg can be seen in Fig. 6.2.
The contrast of the images in Fig. 6.2 was adjusted to the same range.
6.2. Radiometric Correction
A two-point correction was applied to every acquired image by subtracting a dark image (Fig. 5.5, Sec. 5.4.2) and dividing by the field darkening image (Fig. 5.4, Sec. 5.4.1). This correction compensates the different offsets and sensitivities of the individual pixels. Especially the correction of the field darkening is very important for the polarimeter, because different intensities are compared for every pixel.
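A minimal sketch of this two-point correction is given below; the normalisation of the field darkening image to its mean is an assumption for illustration, since the exact normalisation used in the processing chain is not spelled out here.

```python
import numpy as np

def radiometric_correction(raw, dark, flat):
    """Two-point radiometric correction: subtract the dark image and divide
    by the (dark-corrected, mean-normalised) field darkening image."""
    flat_corr = flat.astype(float) - dark          # remove the fixed offset from the flat field
    flat_norm = flat_corr / flat_corr.mean()       # mean sensitivity normalised to 1
    return (raw.astype(float) - dark) / flat_norm
```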
Figure 6.2.: Example raw images acquired from the three different cameras of the polarimeter with polarization filter angles of a 0°, b 60°, c 120°.
6.3. Distortion Correction
Like the radiometric correction, the distortion correction has to be done for every camera individually, since every lens and camera sensor is adjusted differently. Especially the 8 mm optics used in the experiments at the HSVA have a non-negligible distortion, which is visible in particular at the edges of the images. The determination of the distortion parameters (two radial and two tangential) is done as described in Sec. 5.4.4. The parameters for every setup are listed in Tab. A.1 and Tab. A.2. The distortion correction was done with the modified rect routine of the Camera Calibration Toolbox for Matlab (Bouguet, 2008). Because the distortion correction is computationally expensive, the resulting images after the radiometric and distortion correction were saved for every data set, so that all further steps in the data processing chain can start from corrected images.
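For illustration, the lens model behind the parameters in Tab. A.1 and Tab. A.2 (two radial and two tangential coefficients; the tangential ones are called p1 and p2 here and correspond to k3 and k4 in the tables) can be sketched as follows. This is the forward model that maps undistorted normalised coordinates to distorted ones, not the toolbox's rect routine itself.

```python
import numpy as np

def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion to
    undistorted, normalised image coordinates (x, y)."""
    r2 = x**2 + y**2
    radial = 1.0 + k1 * r2 + k2 * r2**2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x**2)
    y_d = y * radial + p1 * (r2 + 2.0 * y**2) + 2.0 * p2 * x * y
    return x_d, y_d
```

The undistorted image is then obtained by building the inverse remapping grid from this model and resampling the image, which is what the rect routine of the toolbox does.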
6.4. Mapping of the Images
The three cameras look at the water surface from slightly different positions and perspectives. Hence, it is not possible to map the different images onto each other by a simple translation. To align the three images pixel by pixel, two homographies (or projective transformations, see Sec. 5.2.1) were calculated that map the outer cameras onto the middle camera. In the same step the images were transformed to an orthogonal coordinate system. This is important because the cameras look at the water surface under 37° and hence the imaged section is not rectangular. The rectification is also a projective transformation (Sec. 5.2.1). At the HSVA experiment, the parameters of the transformation were determined with a chessboard floating on the water. The chessboard was also used to calculate the dimensions of one pixel on the water surface. Figure 6.3 shows an example result of the mapping of all cameras of one box in RGB colour coding.
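The mapping itself is the application of a 3×3 homography in homogeneous coordinates; a minimal sketch for point coordinates is shown below. The resampling of whole images onto the rectified grid of the middle camera follows the same transformation.

```python
import numpy as np

def apply_homography(H, points):
    """Map an (N, 2) array of pixel coordinates with the 3x3 homography H
    (projective transformation in homogeneous coordinates)."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                    # divide by the third component
```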
Figure 6.3.: Example image of the mapping process. The three cameras are shown in RGB-coding.
6.5. Calculation of the Slope Distribution
After the mapping we obtain at every pixel position the three intensities that were recorded with the different polarization filter directions. These intensities are converted with the calibrated analyzer matrix into the first three components of the Stokes vector, see Sec. 3.1 and Eq. (3.3). From the components of the Stokes vector, the orientation angle Φ and the degree of linear polarisation (DOLP) are calculated, see Sec. 2.2.2, Eq. (2.49) and Eq. (2.45). The non-linear transformation from DOLP to the incidence angle θi (see Sec. 3.2) is done with a look-up table. With the angle of incidence θi and the orientation angle Φ, the slopes in X- and Y-direction (s_x, s_y) are calculated with Eq. (3.11). Figure 7.3 and Fig. 7.6 display example images of DOLP and the orientation angle Φ. Figure 7.4 and Fig. 7.7 show the corresponding slopes in X- and Y-direction.
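The per-pixel processing described above can be sketched as follows. The analyzer matrix A and the DOLP-to-θ look-up table are assumed to be given; the sign and axis convention of the slope components is fixed by Eq. (3.11) and may differ from the simple convention used in this sketch.

```python
import numpy as np

def intensities_to_slope(I, A, dolp_to_theta):
    """Convert the three filter intensities I = (I1, I2, I3) of one pixel into
    slope components: Stokes vector -> DOLP and orientation angle -> slope."""
    S0, S1, S2 = A @ np.asarray(I)            # analyzer matrix: intensities -> Stokes components
    dolp = np.sqrt(S1**2 + S2**2) / S0        # degree of linear polarisation
    phi = 0.5 * np.arctan2(S2, S1)            # orientation angle of the polarisation
    theta = dolp_to_theta(dolp)               # incidence angle (radians) from the look-up table, NaN if DOLP > 1
    s_x = np.tan(theta) * np.sin(phi)         # slope components (axis convention of Eq. (3.11) assumed)
    s_y = np.tan(theta) * np.cos(phi)
    return s_x, s_y
```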
6.6. Calculation of the Height Distribution
From the two slope images (s_x, s_y) the surface elevation can be reconstructed with the algorithm developed by Frankot and Chellappa (1988). The principle of this algorithm is described in Sec. 3.4. Because the algorithm is based on Fourier transformations, the slope images first have to be cleared of NaNs, since the Fourier transformation cannot handle them. The NaN reconstruction algorithm is described in the next section.
The reconstructed height is given in pixels and must be scaled with the pixel scale that is determined in Sec. 6.4. Examples of reconstructed heights are shown in Fig. 7.5 and Fig. 7.8.
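A compact sketch of the Frankot and Chellappa (1988) integration in Fourier space is shown below; unit pixel spacing is assumed, the result must still be scaled with the pixel size, and the mean height is undetermined by construction.

```python
import numpy as np

def frankot_chellappa(sx, sy):
    """Reconstruct a surface from its x- and y-slope images by enforcing
    integrability in Fourier space (Frankot and Chellappa, 1988)."""
    rows, cols = sx.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(cols).reshape(1, -1)
    ky = 2.0 * np.pi * np.fft.fftfreq(rows).reshape(-1, 1)
    Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
    denom = kx**2 + ky**2
    denom[0, 0] = 1.0                               # avoid division by zero at the DC component
    H = (-1j * kx * Sx - 1j * ky * Sy) / denom
    H[0, 0] = 0.0                                   # the mean height is lost, set it to zero
    return np.real(np.fft.ifft2(H))
```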
6.6.1. NaN-Reconstruction
Figure 6.4.: Block diagram of the NaN correction: the slope image (NaNs marked in red) and the not-NaN mask are both smeared out with the same Gaussian filter; the reconstructed image is obtained by dividing the smeared slope image by the smeared mask.
In many slope images there are NaNs in the data because of too high DOLP values (see Sec. 7.2.2) in the transformation from DOLP to the incidence angle. Since the FFT (Fast Fourier Transform) cannot deal with NaNs, they have to be removed. The simple solution is to set all NaNs to zero; for the reconstructed height this means that it stays at the same level wherever the slope is zero. Another approach is to fill the NaNs by smearing out the slope images so that the gaps get filled. The steps of this reconstruction are listed below, followed by a short sketch.
NaN-Correction
1. Detect the NaNs in a slope image and generate masks (NaN-mask and not-NaN-mask).
2. Smear the slope image with a Gaussian filter.
3. Smear the not-NaN-mask with the same Gaussian filter.
4. Correct the darkening in the middle of a large hole by dividing by the smeared not-NaN-mask.
5. Take the original image and fill the NaNs with the values of the corrected image.
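A sketch of these steps as a normalised convolution is given below; the filter width sigma is a placeholder, not the value used in the actual processing.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fill_nans(slope, sigma=5.0):
    """Fill NaN areas in a slope image: smooth the zero-filled image and the
    not-NaN mask with the same Gaussian filter and divide (steps 1-5 above)."""
    nan_mask = np.isnan(slope)
    smoothed = gaussian_filter(np.where(nan_mask, 0.0, slope), sigma)
    weight = gaussian_filter((~nan_mask).astype(float), sigma)
    filled = smoothed / np.maximum(weight, 1e-12)   # undo the darkening inside large holes
    out = slope.copy()
    out[nan_mask] = filled[nan_mask]                # keep the original values outside the gaps
    return out
```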
6.7. Timing of the Data Processing
The analysis of the polarimeter data can in principle be done online (in real time) if the system is calibrated in advance. For this, the processing time of all steps is critical. Therefore, the different steps of the data processing chain were timed; the results are listed in Tab. 6.1. The calculations were done with Matlab® on a PC with an Intel® Core™ i7-3820 processor with 3.60 GHz and 64 GB of RAM. The time estimation was done with a data set of the Hamburg experiments with 3000 images. The initial image size was 648 × 486 pixels and the final size was 486 × 470 pixels.
Table 6.1.: Timing of the different data processing steps. The times are given for processing an image triple from the three cameras (resolution: 648 × 486 pixels) to the output parameters (s_x, s_y, h) (resolution: 486 × 470 pixels).

Data Processing Step                                                      3000 Triples    1 Triple
Calibration (radiometric and geometric, Sec. 6.2 and Sec. 6.3)            160.14 s        53.38 ms
Mapping, analyzer matrix, DOLP and X-, Y-slope (Sec. 6.4 and Sec. 6.5)    81.69 s         27.23 ms
NaN- and height-reconstruction (Sec. 6.6)                                 90.77 s         30.26 ms
Total                                                                     332.60 s        110.87 ms
7. Results
7.1. Results of the Meteor
As described in Sec. 4.1, the Stereo Polarimeter was barely tested before it was used on board of the ship. Therefore most of the calibration had to be done afterwards. In Sec. 4.3 it is mentioned that chessboard pictures were taken to calibrate the stereo system. Since only the polarimeter part of the Stereo Polarimeter was my area of responsibility, the stereo calibration and the stereo data analysis have not been processed yet. As all reflections at the water surface are specular (see Sec. 3.2), it is not possible to use standard stereo algorithms (Jähne et al., 1994). The main difficulties of the stereo analysis will be that the illumination from the sky was very inhomogeneous and that the sea surface roughness was too low most of the time, so that there is not enough structure for stereo matching.
It turned out that the analysis of the polarimeter data is challenging as well. First and foremost, it was not possible to place a chessboard on the water surface, as in the experiments in Hamburg (see Sec. 6.4). Therefore the three images of one polarimeter box had to be mapped with the calibration data from Hanau (Sec. 5.5). Since the cameras were tilted on the ship, the exact mapping and the exact scale cannot be obtained from this calibration. This problem can in principle be fixed with height data from the RSSG1 or from the stereo system. Another difficulty of the Meteor data is the stereo effect between the three cameras of the polarimeter. Although the polarimeter was located 11.14 m above the mean sea surface, the stereo basis of 3.5 cm (Sec. 4.2) from one camera to the next was sufficient to cause a visible parallax effect in the images.
To estimate the stereo effect in the images, the change of the disparity is calculated. The starting point is the equation for the disparity or parallax, see Jähne (2005, p. 221, Eq. (8.3)):
\[
p = \frac{b \cdot f}{X_3} \tag{7.1}
\]
where p is the disparity in pixels, b is the length of the stereo basis, f is the focal length in pixels and X_3 is the distance from the cameras to the object. To calculate the change of the disparity when the length of the line of sight changes, the disparity has to be differentiated:
\[
\frac{\mathrm{d}p}{\mathrm{d}X_3} = -\frac{b \cdot f}{X_3^2} \tag{7.2}
\]
With the data from Tab. 4.3 and Tab. A.2 (b = 3.5 cm, f = 7273 px taken from the camera calibration, X_3 = 11.14 m) we obtain for the change of the disparity:
\[
\left| \frac{\mathrm{d}p}{\mathrm{d}X_3} \right| = 2.1\ \mathrm{px/m} \tag{7.3}
\]
This means that for a wave height of only 0.5 m, a stereo displacement of about 1 px appears between two neighbouring cameras. The waves on the ocean were definitely higher, so this effect cannot be ignored.
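The value of Eq. (7.3) follows directly from Eq. (7.2):

```python
b = 0.035     # stereo basis [m]
f = 7273.0    # focal length [px]
X3 = 11.14    # distance to the mean sea surface [m]

print(b * f / X3**2)   # |dp/dX3| ~ 2.1 px per metre of elevation change
```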
1 Reflective Stereo Slope Gauge: Kiefhaber et al. (2011)
To illustrate the stereo effect, Fig. 7.1 shows a mapped image of all three cameras in RGB color coding. In the upper part of the image a shift occurs, whereas in the lower part the shift is barely visible. Since this is hard to see with the naked eye, two profiles across bubbles were taken in the upper and lower parts of the image. The stereo effect appears as a shift of the peaks in Fig. 7.2.
Figure 7.1.: Matched example image from the Meteor data illustrated in RGB-colors for the three different
cameras. The red lines in the upper and lower part of the image indicate the position where a profile was
taken. The profiles are shown in Fig. 7.2.
Figure 7.2.: Profiles in x-direction taken across bubbles in the image in Fig. 7.1. a Profile of a bubble on the upper right-hand side. b Profile of a bubble on the lower left-hand side. The green line represents the middle camera. The shift of the peaks between the cameras due to the stereo effect can be seen here.
The stereo effect can in principle be corrected if the tilt of the ship and the exact surface elevation are known, for example from the stereo measurements or from the RSSG measurements.
As mentioned in Sec. 3.5, the polarization and luminosity of the incoming light have to be considered. Since it is not possible to install an artificial light source that is large enough in front of the ship, all measurements have to be done during daytime. During daytime the measurements can be distorted by the polarization of the clear sky (see Sec. 3.2.1). The polarization of the incoming light can be diminished by cloud cover, which was unfortunately very rare during the Meteor 91 cruise. Another approach is to take the incoming polarization into account with a Rayleigh sky model or to measure it with a polarimeter facing the sky. The latter is no longer possible for this data set. Solving all these problems was beyond the scope of this thesis.
7.2. Results from Hamburg
The conditions at the towing tank in Hamburg, where a mechanical wave generator was used, were completely different from those on the Meteor. The waves were mainly moving in one direction (see Sec. 4.5.1) because they were not wind driven. Since the waves were very smooth and had a relatively long wavelength, it was possible to reduce the resolution of the cameras (according to the constraints in Sec. 3.5) with a 4×4 binning mode (see Sec. 4.5). With the reduced resolution and a maximum wave amplitude of 0.15 m, the stereo effect that was visible in the Meteor data could be ignored. A sufficient illumination in the laboratory was important (see Sec. 3.5 and Sec. 4.5.1), and the binning mode was beneficial for the gathering of light.
Before the experimental conditions are discussed in Sec. 7.2.2, a set of example images is shown. The images are the result of the data processing chain of Sec. 6.
7.2.1. Example Images
A set of example images of the polarimeter is displayed in the following paragraphs. All example images are taken from the same series, HSVA15_001. The monochromatic water waves of this series always move from right to left in the images. The degree of linear polarisation and the polarization angle Φ are shown as one set, and the slopes in X- and Y-direction (s_x, s_y) are displayed as a pair. Attention has to be paid to the scale of the images, since the scale for the X-slope s_x ranges from −0.3 to 0.3 (in degrees: arctan 0.3 = 16.7°) and for the Y-slope s_y from −0.2 to 0.2 (in degrees: arctan 0.2 = 11.3°). The elevation map of the surface, given in centimeters, is reconstructed from the two slope images (s_x, s_y) (see Sec. 6.6).
Figure 7.3.: a DOLP and b polarization angle Φ from the series HSVA15_001 at the time t = 40.12 s.
Figure 7.4.: Slope in a X-direction s_x and b Y-direction s_y from the series HSVA15_001 at the time t = 40.12 s.
Figure 7.5.: Example image of the reconstructed height from the series HSVA15_001 at the time t = 40.12 s.
Figure 7.6.: a DOLP and b polarization angle Φ from the series HSVA15_001 at the time t = 52.0 s.
Figure 7.7.: Slope in a X-direction s_x and b Y-direction s_y from the series HSVA15_001 at the time t = 52.0 s.
Figure 7.8.: Example image of the reconstructed height from the series HSVA15_001 at the time t = 52.0 s.
7.2.2. Experimental conditions
Illumination
To obtain correct results from polarimetric slope imaging measurements, the incoming light plays an important role, as shown in Sec. 7.2.6. For illumination, two LED spotlights were installed (see Sec. 4.5.1), facing the ceiling in order to widen the illuminated area. The illuminated area was still too small, which can be seen as dark spots in the image corners. With this lack of illumination it is not possible to obtain qualitatively good results. Figure 7.9 shows an example image of the degree of linear polarisation, where the noise becomes very large, especially in the lower image corners.
The size of the illuminated area also plays an important role for steep waves with a high curvature: a steep surface reflects light from a spot that the illumination source does not cover. This causes high noise levels in these areas.
Figure 7.9.: Example image of the degree of linear polarisation of the series HSVA15_001. Attention should be paid to the noise in the lower image corners, which is caused by the lack of illumination.

To quantify the lack of illumination, we can compute the effect of the noise in the raw images on the noise in the processed DOLP images. In the area marked by the red square in Fig. 7.9, the absolute error (in gray values) in the mapped raw images is
\[
\begin{pmatrix} \sigma_{I_1} \\ \sigma_{I_2} \\ \sigma_{I_3} \end{pmatrix}
= \begin{pmatrix} 151.98 \\ 131.67 \\ 135.51 \end{pmatrix} \tag{7.4}
\]
With the propagation of uncertainty (Eq. (3.3)) we obtain for the errors of the Stokes vector components
\[
\begin{pmatrix} \sigma_{S_0} \\ \sigma_{S_1} \\ \sigma_{S_2} \end{pmatrix}
= \sqrt{A \cdot \begin{pmatrix} \sigma_{I_1}^2 \\ \sigma_{I_2}^2 \\ \sigma_{I_3}^2 \end{pmatrix}}
= \begin{pmatrix} 0.0179 \\ 0.0275 \\ 0.0235 \end{pmatrix} \tag{7.5}
\]
The propagation of uncertainty can also be evaluated for the DOLP (Eq. (2.49)):
\[
\sigma_\mathrm{DOLP} = \sqrt{ \left( \frac{\partial \mathrm{DOLP}}{\partial S_0} \, \sigma_{S_0} \right)^2 + \left( \frac{\partial \mathrm{DOLP}}{\partial S_1} \, \sigma_{S_1} \right)^2 + \left( \frac{\partial \mathrm{DOLP}}{\partial S_2} \, \sigma_{S_2} \right)^2 } \tag{7.6}
\]
where
\[
\frac{\partial \mathrm{DOLP}}{\partial S_0} = -\frac{\sqrt{S_1^2 + S_2^2}}{S_0^2}, \qquad
\frac{\partial \mathrm{DOLP}}{\partial S_1} = \frac{S_1}{S_0 \sqrt{S_1^2 + S_2^2}}, \qquad
\frac{\partial \mathrm{DOLP}}{\partial S_2} = \frac{S_2}{S_0 \sqrt{S_1^2 + S_2^2}}
\]
Following the rules of propagation of uncertainty for normally distributed errors, we obtain for the error of the DOLP σ_DOLP = 0.0982. The actually measured noise in the DOLP amounts to 0.0852. This shows that the noise indeed comes from the lack of proper illumination. Therefore it is clear that the data quality can be increased significantly by increasing the power of the illumination in future experiments.
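The propagation of Eq. (7.6) can be written compactly as in the following sketch; the Stokes components and their uncertainties are taken per pixel or as area means.

```python
import numpy as np

def dolp_uncertainty(S0, S1, S2, sigma_S0, sigma_S1, sigma_S2):
    """Propagate the Stokes-component uncertainties to the DOLP (Eq. (7.6))."""
    r = np.sqrt(S1**2 + S2**2)
    dD_dS0 = -r / S0**2
    dD_dS1 = S1 / (S0 * r)
    dD_dS2 = S2 / (S0 * r)
    return np.sqrt((dD_dS0 * sigma_S0)**2 +
                   (dD_dS1 * sigma_S1)**2 +
                   (dD_dS2 * sigma_S2)**2)
```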
Ring waves
A perturbing effect that appeared at the towing tank at the HSVA was the generation of ring
waves. Figure 7.10 shows an example image of the phenomenon.
At the side of the tank a water drain channel was installed. To empty the water drain channel, holes connect the channel with the tank. These holes were located just above the water surface when the water was calm.
Figure 7.10.: Example image of ring waves shown in the X-component of the slope s_x of the series HSVA15_001. It can be recognized that the ring waves come from the side of the towing tank. (The main wave direction is the X-direction.)
When a wave crest moved close, some water flooded the holes. When the wave trough then passed the hole, the water streamed out and generated small-scale ring waves. These ring waves have an effect on the wave spectra, as will be seen later. This is of importance since we want to compare the wave spectrum of the wave wire with that of the polarimeter, and the wave wire was installed closer to the wave generator than the polarimeter (see Fig. 4.4).
Stereo Effect
Figure 7.11a shows an example image of the degree of linear polarisation where all values with DOLP > 1 are marked in blue. Because DOLP > 1 is unphysical, the error must come from the data analysis. A notable fact is that this defect occurs especially where the reflected structure is very inhomogeneous. To investigate this error, a profile was taken in x-direction at the same position in the DOLP image and in the already mapped image (Fig. 7.11b). The position of the profile is indicated by a red line in the images.
Figure 7.12 shows both profiles, of the DOLP and of the mapped images. By comparing the two profiles, it is noticeable that DOLP decreases when the blue profile decreases, and increases when the green line decreases. Hence, if the two dips of the blue and the green image profiles coincided, DOLP would not be too high.
As mentioned in the discussion of the Meteor data (see Sec. 7.1), a stereo effect arises due to the design of the polarimeter, with a stereo basis of 3.5 cm from one camera to the next. Since the images were mapped onto the water surface with a perspective transformation, the parallax was artificially set to zero at the water surface. The reflected background, however, is far away from the cameras. A parallax shift can therefore be recognized in the images if the reflected background is inhomogeneous. This effect is inherent in the simple polarimeter design, but its consequences can be eliminated by making the illuminated area as homogeneous as possible.
7.2.3. Slope Images
The primary result of the polarimeter is the degree of linear polarisation (DOLP) and the polarization angle Φ (see Sec. 6.5). An example image of each is shown in Fig. 7.3 and Fig. 7.6. These results were then converted to the slopes in X- and Y-direction (s_x, s_y) (see Sec. 3.1) with Eq. (3.11).
Figure 7.11.: a Example image of DOLP where all areas with DOLP > 1 are marked with blue color. b A
detail of the same image of the 3 cameras in RGB color coding, mapped at the water surface. The red lines
in both images indicate the position of the profile taken there. (see Fig. 7.12)
Figure 7.12.: Profiles of a the DOLP image and b the mapped image in RGB color coding, taken from Fig. 7.11.
Figure 7.4 and Fig. 7.7 show the results of the conversion. As one can see with the naked eye, the slope in X-direction s_x is determined much more by the polarization angle Φ than by the degree of linear polarisation, and vice versa for the slope in Y-direction s_y. This shows clearly that the incidence angle θ, obtained from the DOLP, is the main factor for the Y-component of the slope s_y. Thus, due to the conversion from DOLP to θ, the resolution in Y-direction is limited to the range θ = 0°–53°, which corresponds to slopes from 0.0 to 1.3.
7.2.4. Height Reconstruction
Figure 7.5 and Fig. 7.8 show examples of the reconstructed height in a 3D representation. Noticeable is the smoothness of the reconstructed height, although there was a lot of noise in the corners of the images (see Sec. 7.2.2). This noise is diminished by the integration of the slope (see Sec. 3.4): the correct mean value is obtained by the integration if the noise scatters in a Gaussian manner around this mean value. The smooth result of the height reconstruction demonstrates the quality of the polarimetric slope imaging results.
As mentioned in Sec. 7.2.2, the degree of linear polarisation is sometimes larger than one (DOLP > 1). In the conversion from DOLP to the incidence angle θ, all DOLP values above one are set to NaN. Because the Fourier transformation in the height reconstruction routine cannot deal with NaNs, the areas with NaNs in the slope images (s_x, s_y) have to be corrected first. This can be done in two ways. The simple way is to set all NaNs to zero, which means that the height remains on the same level wherever the slope is set to zero. The other approach is described in Sec. 6.6.1, where the NaN areas are filled up by a reconstruction algorithm. A comparison of the two NaN correction methods is discussed in this section. The first method will be referred to as "NaN = 0" and the other one as "NaN reconstruction". Figure 7.13a depicts a reconstructed elevation map of the water surface with NaN reconstruction and Fig. 7.13b shows a detail image of the height difference with and without NaN reconstruction. The red line indicates the position where a profile was taken from the images.
Figure 7.13.: a Example image of the series HSVA15_001 of the height with NaN reconstruction. b Detail of the height difference between the two height reconstructions with differently NaN-corrected slopes. The red line indicates where a profile was taken (see Fig. 7.15a).
To understand what is happening, one has to consider the slope images with the two different NaN correction methods, which are depicted in Fig. 7.14. The difference between the two methods is quite prominent: in the slope image with NaN = 0 the defects are easily visible, whereas in the image with NaN reconstruction they are not.
Figure 7.15a shows a profile of the two different height distributions, calculated from the differently NaN-corrected slope images. It can be seen that the blue curve with the NaN reconstruction is much smoother and hence more physical than the curve where the NaNs in the slope image were set to zero. Figure 7.15b shows the corresponding profile of the X-slope image, where the difference between the two methods becomes obvious. The blue and the red curves are the same except for the NaN part, where the red curve is set to zero while the blue curve continues quite smoothly. The effect of the zero part in the red slope curve can also be seen in the red height curve as a bend towards the horizontal. This is evident, since a slope of zero is equal to no change in height.
Figure 7.14.: Detail image of the slope in X-direction s_x a with NaN reconstruction and b with NaN = 0. The red line indicates the position of the profile that is shown in Fig. 7.15b.
Figure 7.15.: a Profile of the two different height distributions, calculated from different NaN corrected slope
images. b Profile of two slope images with the two different NaN correction methods.
7.2.5. Monochromatic Height Spectra
At the towing tank a wave wire was installed for comparison (see Sec. 4.5.1). To compare the data from the polarimeter with the wave wire, height power spectra were computed. The height power spectra of the polarimeter are computed as the mean over the spectra of the time series of every pixel in an area of interest. This area of interest, reaching from x = 120 px to 370 px and y = 220 px to 340 px, was chosen to avoid errors from the missing illumination in the corners (see Sec. 7.2.2) and from the stereo effect where the background was inhomogeneous (see Sec. 7.2.2). For better comparability, the data from the wave wire was resampled from 1000 Hz to the same frequency as the polarimeter (25 Hz). The same time range as for the polarimeter was chosen before the height power spectra of the wave wire were computed. Figure 7.16 and Fig. 7.17 show height power spectra of the wave wire compared to the polarimeter for the series HSVA15_001 and HSVA15_002, respectively.
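A sketch of this comparison is given below. The exact spectral estimator used for Fig. 7.16 and Fig. 7.17 is not specified here, so Welch averaging and simple block-average resampling of the wave wire data are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch

def mean_pixel_spectrum(height, fs=25.0):
    """Polarimeter height power spectrum: mean over the spectra of the time
    series of every pixel; height has shape (n_frames, n_y, n_x)."""
    n_t = height.shape[0]
    series = height.reshape(n_t, -1)                      # one time series per pixel
    f, psd = welch(series, fs=fs, nperseg=min(1024, n_t), axis=0)
    return f, psd.mean(axis=1)

def resample_wave_wire(x, factor=40):
    """Resample the wave wire data from 1000 Hz to 25 Hz by block averaging."""
    n = len(x) // factor * factor
    return x[:n].reshape(-1, factor).mean(axis=1)
```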
The measurements fit quite well above 0.9 Hz. Below this range the polarimeter underestimates the energy, especially at the peak frequency of 0.5 Hz. This is because waves below 0.9 Hz (corresponding wavelength > 1.9 m) become longer than the length of the imaged water surface area of the polarimeter (see Tab. 4.5).
Figure 7.16.: Measurement HSVA15_001 (monochromatic waves): Comparison of the spectrum from the
polarimeter and the wave wire.
Figure 7.17.: Measurement HSVA15_002 (monochromatic waves): Comparison of the spectrum from the
polarimeter and the wave wire.
This effect can be seen even for shorter wavelengths. Since the height distribution is computed from the slope distribution (see Sec. 6.6), the mean height is lost with this method. Waves cannot be measured if the assumption that the mean wave height is zero in every image is not valid. This effect is responsible for the underestimation of the energy below 0.9 Hz.
Figure 7.18 shows the power spectra of monochromatic waves for two polarimeter sequences (HSVA15_001, HSVA15_002), where the first one was taken about 130 s before the second one. Although the second sequence (HSVA15_002) matches the wave wire very well, the energy above 2.5 Hz is increased in the second sequence compared to the first one. However, it should be noted that there is hardly any energy in this frequency band and therefore the deviations are very small in total. The additional energy in the high frequency range comes from disturbances which develop over time with the propagating waves. These disturbances could come from an inhomogeneity in the wave generation, from the already discussed ring waves or from reflections of the waves at the end of the towing tank.
Figure 7.18.: Spectra from two sequences of the measurement HSVA15, where the first sequence was
taken before the second one. A clear rise of the energy at higher frequencies can be seen.
Since this effect evolves with time and can also be seen in the wave wire data, it is a real physical effect and not an error in the data processing chain of the polarimeter.
7.2.6. Continuous Height Spectra
A continuous wave spectrum was generated by combining many waves with different frequencies and random phase relations. Figure 7.19 and Fig. 7.20 show the comparison of a wave wire spectrum with a polarimeter spectrum for the series HSVA16_001 and HSVA16_002, respectively. The continuous spectra were computed with logarithmically spaced bins (Tröbs and Heinzel, 2006) to obtain a smoother curve in the higher frequency range.
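For illustration, a simple logarithmic binning of a power spectral density can be sketched as follows; this is only a stand-in for the more elaborate method of Tröbs and Heinzel (2006).

```python
import numpy as np

def log_bin_spectrum(f, psd, n_bins=50):
    """Average a power spectral density into logarithmically spaced
    frequency bins to smooth the high-frequency range."""
    pos = f > 0
    edges = np.logspace(np.log10(f[pos].min()), np.log10(f[pos].max()), n_bins + 1)
    idx = np.digitize(f, edges)
    f_binned, p_binned = [], []
    for i in range(1, n_bins + 1):
        sel = idx == i
        if sel.any():
            f_binned.append(f[sel].mean())
            p_binned.append(psd[sel].mean())
    return np.array(f_binned), np.array(p_binned)
```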
Figure 7.19.: Continuous wave spectra of the series HSVA16_001 of wave wire and polarimeter. The deviation below 0.9 Hz is obvious.
As seen in Fig. 7.19 and Fig. 7.20, the continuous spectra show the same behavior below 0.9 Hz as the monochromatic spectra in Fig. 7.16 and Fig. 7.17. This effect was already discussed in the previous Sec. 7.2.5. There is evidence that 0.9 Hz is the lower limit of the polarimeter due to the size of the imaged area of interest on the water surface.
Figure 7.20.: Continuous wave spectra of the series HSVA16_002 of wave wire and polarimeter. The deviation below 0.9 Hz is obvious.
Above this limit the spectrum of the polarimeter fits quite well to the spectrum of the wave wire.
In Fig. 7.20 the energy of the polarimeter spectrum is slightly increased at the higher frequencies compared to the wave wire spectrum. This effect can be seen in almost all continuous spectra from the second measurement series. Therefore it seems likely that the increase comes from the disturbances of the ring waves or from the reflection of the waves at the end of the towing tank. Figure 7.21 shows the same behavior as Fig. 7.18, but for a continuous wave spectrum. The first sequence (HSVA16_001) was taken about 130 s before the second sequence (HSVA16_002).
Figure 7.21.: Comparison of two spectra of the same series (HSVA16) but from different sequences. The
build up of the high frequency disturbances over time is observable here.
For the continuous spectrum, the effect of the build-up of the small-scale waves is more pronounced due to the interference of many different wavelengths. Therefore, the increase of the small-scale waves over time can be seen clearly in the comparison of the two sequences in Fig. 7.21.
7.2.7. Polarimeter Characteristics
As discussed in the previous sections, the polarimeter has some inherent properties, which are given by the setup and the monitored water surface area. There are some external characteristics as well, which result from the position at the towing tank and from the disturbing effects of ring waves, surface films or reflections of the water waves. It is very hard to isolate these effects. Yet it is possible to determine the characteristics, or transfer function, of the polarimeter for this setup.
The transfer function of the polarimeter is obtained by dividing the continuous wave spectra from the polarimeter by the corresponding wave spectra of the wave wire. This was done for nine continuous logarithmic spectra. The mean of these nine ratio spectra is shown in Fig. 7.22.
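A sketch of this procedure, assuming the polarimeter and wave wire spectra are already evaluated on a common (log-binned) frequency axis:

```python
import numpy as np

def transfer_function(pol_spectra, wire_spectra):
    """Mean ratio of polarimeter and wave wire spectra over several series
    (here: the nine continuous, log-binned spectra)."""
    ratios = [p / w for p, w in zip(pol_spectra, wire_spectra)]
    return np.mean(ratios, axis=0)

def correct_spectrum(pol_spectrum, tf):
    """Correct a measured polarimeter spectrum with the transfer function."""
    return pol_spectrum / tf
```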
Figure 7.22.: Response function of the polarimeter at the towing tank, calculated as the mean of nine continuous spectra. The polarimeter shows a filtering behavior below the cutoff frequency of 0.88 Hz. The cutoff frequency was determined as the intersection of the two red lines fitted to the transfer function.
Figure 7.22 shows the response of the polarimeter in Fourier space. The cutoff at 0.88 Hz is clearly visible as a roll-off below this frequency. This decrease could be corrected by measuring the large-scale waves, which could be done by combining height measurements with the polarimeter data. It is sufficient to know the height at one known point in a polarimeter image to restore the mean height that is lost in the integration.
It is possible to apply the transfer function to the monochromatic spectrum, like in Fig. 7.16.
Figure 7.23 shows a polarimeter spectrum corrected with the response function. Since the response function is calculated with the continuous spectra, the correction is independent of the
monochromatic spectrum.
As expected, with the correction the spectra fit quite well even at the main peak at 0.5 Hz.
Figure 7.23.: Spectrum of the series HSVA15_001 (see Fig. 7.16). The spectrum of the polarimeter is
corrected by the response function of the polarimeter which was calculated independently of this spectrum.
8. Conclusion and Outlook
8.1. Conclusion
The approach of this thesis was to develop a simple polarimetric slope sensing instrument for
water waves. Unlike common imaging polarimeters, where multiple cameras share a single
custom lens, it consists of three cameras aligned in parallel, each equipped with a polarization
filter and a standard lens (Sec. 4.2).
Due to the simple setup, additional effort has to be spent on the image processing. Especially the calibration of the intrinsic, extrinsic, and distortion camera parameters (Sec. 5) was one of the key points for successful measurements. Since three images have to be mapped onto each other with a projective transformation (Sec. 6.4), the correction of the distortion of the lenses was crucial, see Sec. 5.4.4. To obtain the intrinsic and distortion parameters of the cameras, a special calibration procedure with a custom built target was performed, see Sec. 5.5. The heart of the polarimeter is the analyzer matrix (Sec. 3.1), which transforms the incoming intensities into the parameters of the Stokes vector (Sec. 2.2.2). The calibration of the analyzer matrix (Sec. 5.6) was verified in a specifically designed inclination test (see Sec. 5.6.1).
The polarimeter was deployed to two experiments (Sec. 4.1 and 4.4) where its capabilities
were tested. The instrument was collecting data during the M91 cruise on board of the German
research vessel Meteor. While the full evaluation of this data set is beyond the scope of this thesis, an analysis of the polarimetric slope imaging measurements is presented in Sec. 7.1. The
second experiment was conducted at the small towing tank at the Hamburgische SchiffsbauVersuchsanstalt (HSVA) in Hamburg. This tank is equipped with a mechanical wave generator
and a wave wire for reference measurements (see Sec. 4.5.1). The polarimeter was modified to
fit the needs of laboratory measurements (Sec. 4.5).
The capabilities of the polarimeter in capturing the slope distribution of mechanically generated waves are demonstrated (Sec. 7.2). The necessity of a powerful illumination and perturbations of the measurements due to imperfections of the laboratory setup are discussed (Sec.
7.2.2). The effect of data gaps on the reconstruction of surface elevation (wave height) from the
measured wave slope is discussed and different approaches are compared (Sec. 7.2.4).
Comparative measurements of the polarimeter with the wave wire are presented both for
nearly monochromatic waves (Sec. 7.2.5) and for continuous spectra consisting of a random
superposition of waves with different frequencies (Sec. 7.2.6). Wave height power spectra computed from the polarimeter data are shown to agree with reference measurements for wave
frequencies above 0.9 Hz. Since longer waves have wavelengths comparable to the dimensions
of the footprint of the polarimeter, their mean height is no longer zero at all times as was assumed in the height reconstruction. This leads to a sharp cutoff in the transfer function of the
polarimeter (see Sec. 7.2.7). The derived transfer function can be applied to correct measured
spectra, allowing the polarimeter to measure waves for a wide range of frequencies.
8.2. Outlook
The simple design of the imaging polarimeter presented in this thesis allows for building inexpensive measurement instruments. The trade-off is the inherent stereo disparity, which causes
problems especially for shipborne measurements in which the relative variations of the distance to the water surface are large. To avoid the use of an expensive custom lens (as described
by Pezzaniti et al. (2009)), the three cameras could be placed behind a system of beam splitters
so they are virtually placed in the same position.
The experiences with the polarimeter during the M91 cruise show that it is crucial to include measurements of the polarization of the sky in the data processing scheme to reduce the constraints on the environmental conditions under which the polarimeter can operate.
Apart from its potential to measure small-scale waves on the ocean, an imaging polarimeter might also be useful in other scientific disciplines, e.g. the observation of clouds (Pust and Shaw, 2006) and the sky (Lee, 1998) or the detection of volcanic plumes (Sassen et al., 2007). In all these areas, the strength of the polarimetric technique, making the normally invisible polarization visible, could provide new insights into the mysteries of our world.
Bibliography
G. Balschbach. Untersuchungen statistischer und geometrischer Eigenschaften von Windwellen
und ihrer Wechselwirkung mit der wasserseitigen Grenzschicht. Dissertation, Institut für
Umweltphysik, Fakultät für Physik und Astronomie, Univ. Heidelberg, 2000.
P. Barsic and C. Chinn. Sea surface slope recovery through passive polarimetric imaging. In
Oceans, 2012, pages 1–9, 2012.
A. Benetazzo. Measurements of short water waves using stereo matched image sequences.
Coastal Engineering, 53(12):1013–1032, Dec. 2006.
J.-Y. Bouguet. Camera calibration toolbox for Matlab, http://www.vision.caltech.edu/bouguetj/calib_doc/, 2008.
C. Brosseau. Fundamentals of polarized light. A Wiley-Interscience publication. Wiley, New
York [u.a.], 1998. Includes indexes.
M. Daimon and A. Masumura. Measurement of the refractive index of distilled water from the
near-infrared region to the ultraviolet region. Appl. Opt., 46(18):3811–3820, Jun 2007.
M. A. Donelan, J. Hamilton, and W. H. Hui. Directional spectra of wind-generated waves. Royal
Society of London Philosophical Transactions Series A, 315:509–562, Sept. 1985.
M. A. Donelan and R. Wanninkhof. Gas transfer at water surfaces - concepts and issues. In M. A.
Donelan, W. M. Drennan, E. S. Saltzman, and R. Wanninkhof, editors, Gas Transfer at Water
Surfaces. American Geophysical Union, 2002.
D. A. Forsyth and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall, 2002.
R. T. Frankot and R. Chellappa. A method for enforcing integrability in shape from shading
algorithms. IEEE Trans. Pattern Anal. Mach. Intell., 10(4):439–451, 1988.
N. M. Frew, E. J. Bock, U. Schimpf, T. Hara, H. Haußecker, J. B. Edson, W. R. McGillis, R. K.
Nelson, B. M. McKeanna, B. M. Uz, and B. Jähne. Air-sea gas transfer: Its dependence on
wind stress, small-scale roughness, and surface films. J. Geophys. Res., 109:C08S17, 2004.
D. Fuß. Kombinierte Höhen- und Neigungsmessung von winderzeugten Wasserwellen am Heidelberger Aeolotron. Dissertation, Institut für Umweltphysik, Fakultät für Physik und Astronomie, Univ. Heidelberg, Heidelberg, Germany, 2004.
H. Haferkorn. Optik. Johann Ambrosius Barth, Leipzig, 3 edition, 1994.
R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge Univ
Press, 2 edition, 2003.
J. Heikkilä and O. Silven. A four-step camera calibration procedure with implicit image correction. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1106–1112,
1997.
G. Horváth, A. Barta, J. Gál, B. Suhai, and O. Haiman. Ground-based full-sky imaging polarimetry of rapidly changing skies and its use for polarimetric cloud detection. Appl. Opt., 41(3):
543–559, Jan 2002.
J. D. Jackson. Classical Electrodynamics. Wiley, 3rd edition, 1998.
B. Jähne. Digital Image Processing. Springer, Berlin, 6 edition, 2005.
B. Jähne, P. Geißler, and H. Haußecker, editors. Handbook of Computer Vision and Applications.
Academic Press, San Diego, 1999.
B. Jähne, J. Klinke, and S. Waas. Imaging of short ocean wind waves: a critical theoretical review.
J.Opt.Soc.Am., 11:2197–2209, 1994.
B. Jähne, K. O. Münnich, R. Bösinger, A. Dutzi, W. Huber, and P. Libner. On the parameters
influencing air-water gas exchange. J. Geophys. Res., 92:1937–1950, Feb. 1987.
G. Kattawar and C. Adams. Stokes vector calculations of the submarine light field in an
atmosphere-ocean with scattering according to a rayleigh phase matrix: effect of interface refractive index on radiance and polarization. Limnology and oceanography, 34(8):1453–1472,
1989.
D. Kiefhaber, R. Rocholz, and B. Jähne. Improved optical instrument for the measurement of
water wave statistics in the field. In S. Komori, W. McGillis, and R. Kurose, editors, Gas Transfer at Water Surfaces 2010, pages 524–534, 2011.
R. L. Lee. Digital imaging of clear-sky polarization. Appl. Opt., 37(9):1465–1476, Mar 1998.
M. S. Longuet-Higgins, D. E. Cartwright, and N. D. Smith. Observations of the directional spectrum of sea waves using the motions of a floating buoy. In Ocean Wave Spectra, proceedings of
a conference, Easton, Maryland, pages 111–136. National Academy of Sciences, Prentice-Hall,
1963.
T. Luhmann. Nahbereichsphotogrammetrie: Grundlagen, Methoden und Anwendungen. Wichmann, Heidelberg, 3 edition, 2010.
J. L. Pezzaniti, D. Chenault, M. Roche, J. Reinhardt, J. P. Pezzaniti, and H. Schultz. Four camera
complete Stokes imaging polarimeter. In Polarization: Measurement, Analysis, and Remote
Sensing VIII, volume 6972 of SPIE Proc., page 69720J, 2008.
J. L. Pezzaniti, D. Chenault, M. Roche, J. Reinhardt, and H. Schultz. Wave slope measurement
using imaging polarimetry. In Ocean Sensing and Monitoring, volume 7317 of SPIE Proc.,
page 73170B, 2009.
N. Pust and J. A. Shaw. Imaging spectropolarimetry of cloudy skies. Proc. SPIE, 6240:624006–
624006–5, 2006.
R. Rocholz. Spatiotemporal Measurement of Short Wind-Driven Water Waves. Dissertation,
Institut für Umweltphysik, Fakultät für Physik und Astronomie, Univ. Heidelberg, 2008.
K. Sassen, J. Zhu, P. Webley, K. Dean, and P. Cobb. Volcanic ash plume identification using polarization lidar: Augustine eruption, Alaska. Geophysical Research Letters, 34(8), 2007.
U. Schimpf, L. Nagel, and B. Jähne. First results of the 2009 SOPRAN active thermography pilot experiment in the Baltic Sea. In S. Komori, W. McGillis, and R. Kurose, editors, Gas Transfer
at Water Surfaces 2010, pages 358–367, 2011.
J. R. Schott. Fundamentals of Polarimetric Remote Sensing. SPIE Press, 2009.
A. Schumacher. Stereophotogrammetrische wellenaufnahmen. Wissenschaftliche Ergebnisse
der Deutschen Atlantischen Expedition auf dem Forschungs- und Vermessungsschiff Meteor,
1925 - 1927, 1939.
D. J. Stilwell. Directional energy spectra of the sea from photographs. J.Geophys.Res., 74:1974–
1986, 1969.
R. Szeliski. Computer Vision Algorithms and Applications. Springer, London, 2011.
M. Tröbs and G. Heinzel. Improved spectrum estimation from digitized time series on a logarithmic frequency axis. Measurement, 39(2):120 – 129, 2006.
E. Trucco and A. Verri. Introductory Techniques for 3D Computer Vision. Prentice Hall, Upper
Saddle River, New Jersey, 1998.
R. Y. Tsai. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf tv cameras and lenses. IEEE journal of Robotics and Automation, 3(4):
323–344, 1987.
G. Videen, Y. Yatskiv, and M. Mishchenko. Photopolarimetry in Remote Sensing. NATO Science Series II: Mathematics, Physics and Chemistry ; 161 ; SpringerLink : Bücher. Springer
Netherlands, Dordrecht, 2005.
C. Zappa, M. Banner, H. Schultz, A. Corrada-Emmanuel, L. Wolff, and J. Yalcin. Retrieval of
short ocean wave slope using polarimetric imaging. Meas. Sci. Technol., 19:055503 (13pp),
2008.
C. J. Zappa, M. L. Banner, H. Schultz, J. Gemmrich, R. P. Morison, D. LeBel, and T. Dickey. An
overview of sea state conditions and air-sea fluxes during RaDyO. J. Geophys. Res., Oceans,
117:C00H19, 2012.
X. Zhang. An algorithm for calculating water surface elevations from surface gradient. Experiments in Fluids, 21:43–48, 1996.
Z. Zhang. A flexible new technique for camera calibration. Technical Report MSR-TR-98-71,
Microsoft, Redmond, WA, USA, 1998.
Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334, November 2000.
W. Zinth and U. Zinth. Optik. Oldenbourg, München, 4., aktualisierte auflage edition, 2013.
A. Appendix
A.1. Rotation Matrices
In this thesis, the following convention for rotation matrices is used.
1. Rotation around the Z-axis with the angle κ:
\[
\mathbf{X} = R_\kappa \cdot \mathbf{x}, \qquad
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} =
\begin{pmatrix} \cos\kappa & -\sin\kappa & 0 \\ \sin\kappa & \cos\kappa & 0 \\ 0 & 0 & 1 \end{pmatrix}
\cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} \tag{A.1}
\]
2. Rotation around the Y-axis with the angle ϕ:
\[
\mathbf{X} = R_\varphi \cdot \mathbf{x}, \qquad
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} =
\begin{pmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{pmatrix}
\cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} \tag{A.2}
\]
3. Rotation around the X-axis with the angle ω:
\[
\mathbf{X} = R_\omega \cdot \mathbf{x}, \qquad
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\omega & -\sin\omega \\ 0 & \sin\omega & \cos\omega \end{pmatrix}
\cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} \tag{A.3}
\]
In general, rotation matrices are orthonormal, which means
\[
R \cdot R^T = R^T \cdot R = I, \qquad R^{-1} = R^T \qquad \text{and} \qquad \det(R) = 1 \tag{A.4}
\]
A general rotation can be described as a rotation first around the Z-axis, then the Y-axis and finally the X-axis. This gives for the general rotation matrix:
\[
R = R_\omega \cdot R_\varphi \cdot R_\kappa =
\begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix} =
\begin{pmatrix}
\cos\varphi\cos\kappa & -\cos\varphi\sin\kappa & \sin\varphi \\
\cos\omega\sin\kappa + \sin\omega\sin\varphi\cos\kappa & \cos\omega\cos\kappa - \sin\omega\sin\varphi\sin\kappa & -\sin\omega\cos\varphi \\
\sin\omega\sin\kappa - \cos\omega\sin\varphi\cos\kappa & \sin\omega\cos\kappa + \cos\omega\sin\varphi\sin\kappa & \cos\omega\cos\varphi
\end{pmatrix} \tag{A.5}
\]
The rotation angles can be determined by the coefficients of the rotation matrix R:
\[
\sin\varphi = r_{13}, \qquad \tan\omega = -\frac{r_{23}}{r_{33}}, \qquad \tan\kappa = -\frac{r_{12}}{r_{11}}
\]
or
\[
\sin\varphi = r_{13}, \qquad \cos\omega = \frac{r_{33}}{\cos\varphi}, \qquad \cos\kappa = \frac{r_{11}}{\cos\varphi} \tag{A.6}
\]
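A short sketch of this convention (composition according to Eq. (A.5) and recovery of the angles according to Eq. (A.6)):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """General rotation R = R_omega * R_phi * R_kappa (Eq. (A.5)); angles in radians."""
    Rk = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                   [np.sin(kappa),  np.cos(kappa), 0.0],
                   [0.0, 0.0, 1.0]])
    Rp = np.array([[np.cos(phi), 0.0, np.sin(phi)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(phi), 0.0, np.cos(phi)]])
    Ro = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(omega), -np.sin(omega)],
                   [0.0, np.sin(omega),  np.cos(omega)]])
    return Ro @ Rp @ Rk

def angles_from_matrix(R):
    """Recover (omega, phi, kappa) from a rotation matrix following Eq. (A.6)."""
    phi = np.arcsin(R[0, 2])
    omega = np.arctan2(-R[1, 2], R[2, 2])
    kappa = np.arctan2(-R[0, 1], R[0, 0])
    return omega, phi, kappa
```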
A.2. Target in Hanau
Figure A.1.: 2D-drawing of the target in Hanau for the geometric calibration (labeled dimensions: 16.7 cm and 50.1 cm).
A.3. Intrinsic Parameters
The intrinsic parameters were calculated with the Camera Calibration Toolbox for Matlab (Bouguet, 2008). Table A.1 shows all parameters for the setup in Hamburg, Table A.2 shows all parameters for the setup on the Meteor. The specifications of the different setups with pixel size and lens properties can be found in Tab. 4.5 for the Hamburg setup and in Tab. 4.3 for the Meteor setup.
Table A.1.: All important parameters of the Camera Calibration Toolbox for Matlab for the setup in Hamburg

Parameter                          buf1A                 buf1B                 buf1C
Image Size [pixel]      Nx         648                   648                   648
                        Ny         486                   486                   486
Focal length [pixel]    fx         895.55 ± 8.58         902.33 ± 9.09         900.45 ± 9.11
                        fy         898.53 ± 8.74         904.41 ± 9.25         902.38 ± 9.26
Principal Point [pixel] cc_x       297.61 ± 8.41         296.61 ± 8.97         290.14 ± 9.04
                        cc_y       240.06 ± 7.41         250.61 ± 7.82         253.34 ± 7.75
Radial distortion       k1         −0.22546 ± 0.01039    −0.22637 ± 0.01083    −0.22648 ± 0.01042
                        k2         0.09437 ± 0.05434     0.08929 ± 0.05449     0.09834 ± 0.04815
Tangential distortion   k3         0.00063 ± 0.00089     0.00007 ± 0.00092     −0.00072 ± 0.00091
                        k4         −0.00139 ± 0.00088    −0.00183 ± 0.00093    −0.00277 ± 0.00094
Pixel error             err_x      0.07688               0.08212               0.08937
                        err_y      0.23094               0.23628               0.23747
Table A.2.: All important parameters of the Camera Calibration Toolbox for Matlab for the setup at the Meteor

Parameter                          buf0A                 buf0B                 buf0C
Image Size [pixel]      Nx         2592                  2592                  2592
                        Ny         1944                  1944                  1944
Focal length [pixel]    fx         7248.82 ± 8.56        7228.17 ± 8.18        7227.83 ± 8.33
                        fy         7252.13 ± 8.57        7229.88 ± 8.18        7229.15 ± 8.34
Principal Point [pixel] cc_x       1203.79 ± 13.22       1181.97 ± 12.69       1172.29 ± 12.88
                        cc_y       1044.38 ± 10.66       1056.05 ± 9.95        1027.34 ± 10.49
Radial distortion       k1         −0.14323 ± 0.00478    −0.16024 ± 0.00457    −0.14743 ± 0.00466
                        k2         0.00000 ± 0.00000     0.00000 ± 0.00000     0.00000 ± 0.00000
Tangential distortion   k3         −0.00392 ± 0.00035    −0.00389 ± 0.00032    −0.00412 ± 0.00034
                        k4         0.00665 ± 0.00040     0.00772 ± 0.00036     0.00642 ± 0.00039
Pixel error             err_x      0.47947               0.45109               0.46587
                        err_y      0.51058               0.49252               0.49707

Parameter                          buf1A                 buf1B                 buf1C
Image Size [pixel]      Nx         2592                  2592                  2592
                        Ny         1944                  1944                  1944
Focal length [pixel]    fx         7266.24 ± 8.24        7229.58 ± 8.20        7207.72 ± 8.29
                        fy         7272.49 ± 8.26        7235.45 ± 8.22        7215.01 ± 8.32
Principal Point [pixel] cc_x       1161.67 ± 12.30       1201.99 ± 12.01       1214.14 ± 15.89
                        cc_y       1042.77 ± 10.16       1037.89 ± 8.79        1138.04 ± 8.97
Radial distortion       k1         −0.17536 ± 0.00458    −0.18418 ± 0.00398    −0.17187 ± 0.00415
                        k2         0.00000 ± 0.00000     0.00000 ± 0.00000     0.00000 ± 0.00000
Tangential distortion   k3         −0.00504 ± 0.00030    −0.00598 ± 0.00026    −0.00489 ± 0.00029
                        k4         0.00810 ± 0.00034     0.00893 ± 0.00028     0.00774 ± 0.00040
Pixel error             err_x      0.46208               0.49416               0.42433
                        err_y      0.48969               0.52405               0.52919
Acknowledgements
I would like to sincerely thank Prof. Dr. Bernd Jähne for supervising this thesis. I was given the opportunity to take part in an exciting measurement campaign in Hamburg as well as in an interesting calibration campaign in the studio hall of AEON. Thank you very much!
Furthermore, I would like to thank Priv.-Doz. Dr. Christoph S. Garbe for kindly agreeing to act as second referee.
A very big thank you goes, of course, to the whole "Windis" working group at the IUP for the good atmosphere, the pleasant coffee breaks, the helpful scientific discussions and the great willingness to help.
I would also like to thank Roland Rocholz, who helped me energetically and with expertise during the swift construction of the polarimeter.
A big thank you goes especially to Leila Nagel, who, together with Daniel Kiefhaber, set up and operated the polarimeter on the Meteor.
My greatest thanks go to Daniel Kiefhaber. As the supervising PhD student he always supported me with great commitment and helped me with his expertise. Nevertheless, he gave me the freedom to try out my own ideas and thus to gain important experience in the field of image processing. He also successfully set up and operated the polarimeter on the Meteor. Furthermore, he helped me with the planning and execution of the measurement campaign in Hamburg and the calibration campaign in Hanau. For this, and for many small things that unfortunately find no space here, many thanks.
Last but not least, I would like to thank my girlfriend, Svenja Reith, my parents, Maria and Franz Bauer, my siblings, Benedikt, Mirjam and Teresa, and my friends, Clemens Schwingshackl and Neta Tsur, from the bottom of my heart for their great support during the last months.
To everyone who helped me with my work but is unfortunately not explicitly mentioned here: a heartfelt THANK YOU!
Declaration:
I hereby declare that I have written this thesis independently and have used no sources or aids other than those indicated.
Heidelberg, 4 November 2013
...........................................