Dissertation
submitted to the
Combined Faculties for the Natural Sciences and for Mathematics
of the Ruperto-Carola University of Heidelberg, Germany
for the degree of
Doctor of Natural Sciences
Presented by
Diplom-Physicist: Markus Jehle
born in: Bad Säckingen
Oral examination: 20.12.2006
Spatio-Temporal Analysis of
Flows Close to Free Water Surfaces
Referees:
Prof. Dr. Bernd Jähne
Prof. Dr. Christoph Cremer
Zusammenfassung
For investigations of the gas exchange between atmosphere and ocean, knowledge of the flow field within and beneath the water-side viscous boundary layer is required. To this end, a novel measurement technique for the spatio-temporal analysis of flows close to the water surface was developed.
A fluid volume is transilluminated by LEDs. Small spherical particles are added to the fluid and serve as tracer particles. A camera pointing at the water surface from above records image sequences. The distance of a sphere to the water surface is encoded by a light-absorbing dye. By using LEDs of two different wavelengths, it becomes possible to use tracer particles of different sizes.
The three velocity components of the flow are obtained by using an extension of the optical flow method, in which the vertical velocity component is determined from the temporal change of brightness. By using three-dimensional parametric motion models, the shear stress can be determined directly, i.e. without prior computation of the velocity fields.
Hardware and algorithms are tested in several ways. A laminar falling film serves as a reference flow; the predicted parabolic profile of this stationary flow can be reconstructed with high accuracy. Convective turbulence serves as an example of an inherently unsteady three-dimensional flow. From sequences originating from biofluidmechanics, the wall shear rate is determined directly, revealing a substantial improvement over conventional methods.
Abstract
In order to examine air-water gas exchange, detailed knowledge of the flow field within and beneath the water-side viscous boundary layer is needed. Therefore, a novel measurement technique is developed for the spatio-temporal analysis of flows close to free water surfaces.
A fluid volume is illuminated by LEDs. Small spherical particles functioning as tracers are added to the fluid. A camera pointing at the water surface from above records the image sequences. The distance of the spheres to the surface is encoded by means of an added dye, which absorbs the light of the LEDs. By using LEDs flashing at two different wavelengths, it becomes possible to use particles of variable size.
The velocity vectors are obtained by using an extension of the method of optical flow. The vertical velocity component is computed from the temporal change of brightness. Using 3D parametric motion models, the shear stress at surfaces can be estimated directly, without prior calculation of the vector fields.
Hardware and algorithms are tested in several ways: a laminar falling film serves as a reference flow, and the predicted parabolic profile of this stationary flow is reproduced very well. Buoyant convective turbulence acts as an example of an unsteady, inherently 3D flow. The direct estimation of the wall shear rate is applied to sequences recorded in the context of biofluidmechanics, revealing a substantial improvement compared to conventional techniques.
Contents

1 Introduction
   1.1 Motivation
   1.2 Own Contribution
   1.3 Thesis Outline

2 Transport of Mass and Momentum at the Air-Water Interface
   2.1 Kinematics of Fluids
       2.1.1 Lagrangian and Eulerian Perspective
       2.1.2 The Helmholtz Theorem
   2.2 Dynamics of Fluids
       2.2.1 Forces in Fluid Dynamics
       2.2.2 Basic Equations
   2.3 Transport in Fluids
       2.3.1 Molecular Transport: Diffusion
       2.3.2 Turbulent Transport
   2.4 Transport Models
       2.4.1 Film Models
       2.4.2 Surface Renewal Models
       2.4.3 Turbulent Diffusion Models
       2.4.4 Surface Divergence Models
   2.5 Microscale Wave Breaking and Langmuir Circulations
       2.5.1 Definition and Theory
       2.5.2 Experiments

3 Fluid Flow Analysis
   3.1 Measurement Technology for Quantities Related to Flows
   3.2 Measurement of Velocity
       3.2.1 Thermoelectric Techniques - Hot-Wire Anemometry
       3.2.2 Optical Methods - Laser Doppler Anemometry
       3.2.3 Acoustic Methods - ADV and ADCP
   3.3 Image Based Methods
       3.3.1 Particle Image Velocimetry
       3.3.2 Particle Tracking Velocimetry
       3.3.3 PIV versus PTV

4 Optical-Flow Methods in Fluid Flow Analysis
   4.1 Introduction
   4.2 Optical Flow Methods
       4.2.1 Differential Techniques
       4.2.2 Frequency-Based Techniques
       4.2.3 Tensor-Based Techniques
   4.3 Improvements of Optical Flow Determination
       4.3.1 Parameterisation of 2D-Optical Flow Fields
       4.3.2 Coarse-to-Fine Techniques
       4.3.3 Robust Estimation
       4.3.4 Dealing with Brightness Changes
   4.4 Literature Review
       4.4.1 Differential Techniques
       4.4.2 Frequency-Based Techniques
       4.4.3 Tensor-Based Techniques

5 Method of Two Wavelengths
   5.1 Method of One Wavelength
   5.2 Method of Two Wavelengths
   5.3 Error Analysis

6 Hardware Components
   6.1 Particles as Tracer
       6.1.1 Scattering Properties
       6.1.2 Particle Characteristics
   6.2 Dye as an Absorber
       6.2.1 Beer-Lambert's Law
       6.2.2 Example Spectra of Dyes
   6.3 LEDs as Light Sources
       6.3.1 Physical Function of High Power LEDs
       6.3.2 Selection of LEDs Used in Our Experiments
       6.3.3 Cooling of the LEDs
       6.3.4 Light Sources
   6.4 Imaging Setup
       6.4.1 Telecentric Optics
       6.4.2 CCD- and CMOS-Cameras
   6.5 Triggering Electronics

7 Image Processing
   7.1 Preprocessing
       7.1.1 Simultaneous Radiometric Calibration and Illumination Correction
       7.1.2 Illumination Correction
       7.1.3 Background Correction
   7.2 Segmentation
       7.2.1 The Watershed Transformation
       7.2.2 Region Growing
       7.2.3 Fit of Gaussians
   7.3 Velocity Estimation
   7.4 3D-Position Estimation
   7.5 Correspondence Analysis and Tracking
       7.5.1 Finding Correspondences
       7.5.2 Particle Tracking
   7.6 Postprocessing
       7.6.1 Elongation of Trajectories
       7.6.2 Smoothing of Trajectories
       7.6.3 Calculation of a Dense Vector Field
   7.7 Estimation of the Wall Shear Rate using 3D Parametric Motion Models
       7.7.1 Parameterization of 3D Physical Flow Fields
       7.7.2 Estimation of the Wall Shear Rate
       7.7.3 Application to Synthetical Data
   7.8 Summary of the Algorithms

8 Experimental Results
   8.1 Calibration of Depth-Resolution
       8.1.1 Calibration via Linear Positioner
       8.1.2 Calibration via a Target
       8.1.3 Discussion: Calibration of Depth-Resolution
   8.2 Measurements in a Falling Film
       8.2.1 Laminar Falling Film - Theory
       8.2.2 Laminar Falling Film - Setup
       8.2.3 Laminar Falling Film - Results
       8.2.4 Simulation of a Laminar Falling Film
       8.2.5 Discussion: Laminar Falling Film
   8.3 Measurements in a Convection Tank
       8.3.1 Turbulent Convection - Theory
       8.3.2 Turbulent Convection - Setup
       8.3.3 Turbulent Convection - Results
       8.3.4 Discussion: Turbulent Convection
   8.4 Application to Sequences acquired in Context of Biofluidmechanics
       8.4.1 Medical Background and Motivation
       8.4.2 Measurement Setup at the Charité
       8.4.3 Results of our Analysis
       8.4.4 Discussion: Biofluidmechanics

9 Summary and Outlook
   9.1 Summary
   9.2 Outlook

A Total Least Squares
   A.1 Ordinary Least Squares
   A.2 Total Least Squares
   A.3 Weighted Total Least Squares
   A.4 Equilibrated Total Least Squares
   A.5 Example: Fitting a Straight Line
   A.6 TLS Estimates from Normal Equations

B The Lie Group of Continuous Transformations
   B.1 Generalization of the Affine Subgroup in 2D
   B.2 Generalization of the Affine Subgroup in 3D
1 Introduction
1.1 Motivation
During the last two centuries the atmospheric carbon dioxide concentration has increased by 100 ppm to today's value of 380 ppm. However, the carbon dioxide stemming from the burning of fossil fuel exceeds this amount by about 100%. The results of [Sabine et al., 2004] show that the oceans store a large fraction of the anthropogenic carbon dioxide. [Le Quéré et al., 2003] quantify the global ocean sinks for the last two decades using recent atmospheric inversions and ocean models (see Fig. 1.1). For the 1990s the mean global oceanic sink is 1.9 Pg C yr$^{-1}$ (1 Pg = $10^{12}$ kg), but an uncertainty on the mean value of the order of ±0.7 Pg C yr$^{-1}$ remains. The high variations in the predictions of the models are partly caused by insufficient understanding of the mechanics of air-sea gas exchange.
Figure 1.1: Top: Annual fossil fuel emissions (solid line) and annual increase of atmospheric carbon dioxide (dotted line). Bottom: Global oceanic CO2 sink computed using different models. The central ocean model estimate was forced with atmospheric CO2 concentrations from ice cores before 1970 and from direct measurements after 1970, where an error of 20% is assumed. The coloured boxes represent estimates by various researchers using differing methods (from [IPCC, 2001] (top) and from [Le Quéré et al., 2003] (bottom)).
Figure 1.2: Air-sea interaction is governed by physical, chemical and biological mechanisms. Hydrodynamical processes at or near the surface include momentum transport by wind stress, wave formation and breaking, and Langmuir circulations (from SOLAS, the Surface Ocean - Lower Atmosphere Study; http://www.uea.ac.uk/env/solas/).
The air-water interface is the "bottleneck" for the gas transfer between oceans and atmosphere. Many relevant gases (like carbon dioxide or methane) are of low solubility in water, so that their transfer resistance resides on the water side. The transfer resistance, or its inverse, the transfer velocity, characterises the rate of gas exchange. In order to model the world-wide annual gas flux, a parametrisation has to be chosen which relates the transfer velocity k to meteorologically accessible quantities like wind speed or the slope of the water surface. The experiments of [Jähne, 1985] yield as a suitable parameterisation:

$$ k = \beta^{-1} u_* \,\mathrm{Sc}^{-n}, \qquad (1.1) $$

where β is a nondimensional scaling factor, u∗ is the friction velocity, which is a measure of the momentum transported by the wind into the water bulk, and Sc = ν/D is the nondimensional Schmidt number, which characterises a substance by the ratio of the fluid's viscosity ν to the substance's diffusivity D. The Schmidt number exponent n characterises the state of the water surface: a smooth surface implies n = 2/3, whereas a wavy surface requires n = 1/2. Waviness can be characterised by the mean square slope of the water surface (which is accessible by radar backscattering methods on a global scale) much better than by the average wave speed.
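As a hedged illustration of how Eq. (1.1) is used, the following sketch evaluates the parameterisation for assumed values of the friction velocity, the Schmidt number and the scaling factor β; none of these numbers are taken from the thesis.

```python
# Illustrative only: evaluating k = beta^-1 * u_star * Sc^-n  (Eq. 1.1)
# for assumed, order-of-magnitude input values.

def transfer_velocity(u_star, Sc, n, beta=16.0):
    """Transfer velocity k in m/s; beta = 16 is an assumed scaling factor."""
    return u_star * Sc ** (-n) / beta

u_star = 0.005    # assumed water-side friction velocity in m/s
Sc = 600.0        # assumed Schmidt number (roughly CO2 in water at ~20 degC)

for n, surface in [(2.0 / 3.0, "smooth"), (0.5, "wavy")]:
    k = transfer_velocity(u_star, Sc, n)
    print(f"{surface:6s} surface (n = {n:.2f}): k = {k * 3.6e5:.2f} cm/h")
```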
In order to identify the physical processes behind this interrelationship, one has to consider the hydrodynamics of the water-side flow. Whereas on large scales mass, heat or momentum are
transported by turbulent eddies much more efficiently than by molecular diffusion, on the small scale of the viscous boundary layer transport by diffusion is the dominant process. One critical parameter for this transport is the boundary layer thickness, which may be variable in space and time. Some models propose that this variability is affected by surface convergences and divergences, which in turn are induced by subsurface turbulence generated by wind stress, (micro) breaking waves or (micro) Langmuir circulations. In extreme cases, the whole surface may be renewed. Today it is generally accepted that all of these models yield the correct Schmidt number dependency in Eq. (1.1), although they differ in the vertical profiles of concentration and momentum.
During the last two decades, image-based velocity measurement methods have been developed which are capable of analysing the surface flow and parts of the internal flow structure of the viscous boundary layer and of surface waves. Most of these methods are restricted to a two-dimensional plane, which typically is aligned in the streamwise direction. Because turbulence, waves and Langmuir circulations are all inherently three-dimensional phenomena, a two-dimensional setup is clearly insufficient. Furthermore, we are interested in the flow very close to the boundary, which may be embedded in a wavy surface. The presence of a phase interface poses a further great challenge to flow measurement.
1.2 Own Contribution
In this thesis a novel method for the spatio-temporal analysis of flows close to the water surface is developed. The following new contributions are presented:
Measurement in 3D-space: We adapt the idea of [Debaene, 2005], which is suited for measuring the wall-near flow in the context of biofluidmechanics, and generalise it for the use of polydisperse seeding particles. Thus we are able to locate the depth of a tracer particle.
Measurement of 3C-velocities: Using extended optical-flow based methods, we are capable of extracting all three components of the Eulerian velocity vector field. Furthermore, by implementing particle tracking (PTV), we obtain its Lagrangian representation [Jehle and Jähne, 2006b,c].
Direct estimation of the wall shear rate: The water-side wall shear rate, which together with the viscosity characterises the stress at the interface, can be estimated directly without prior computation of the velocity vector field [Jehle and Jähne, 2006a].
Both the measurement setup and the algorithms are tested in several ways: by applying the technique to a well-known laminar flow, its accuracy could be evaluated. Adapting it to the more complicated case of heat-driven turbulence, its feasibility for application to unsteady, inherently 3D flows could be demonstrated. The direct estimation of the wall shear rate was applied to sequences recorded in the context of biofluidmechanics.
1.3 Thesis Outline
The thesis is structured into seven major chapters. Chapter 2 is concerned with summarizing
the physical foundations required for understanding the transport of mass and momentum at the
air-water interface. Apart from classical fluid dynamics, some transport models are sketched
and microscale wave breaking and Langmuir circulations are described. Previous attempts to
visualise these phenomena are reviewed.
Chapter 3 presents techniques to measure physical quantities in fluids. Special attention is paid to velocity measurement, covering thermoelectric, optical, acoustic and image-based methods. PIV and PTV belong to the latter; they are explained in more detail.
Another method that has become of interest in fluid flow measurement in recent years is the optical-flow based approach presented in chapter 4. Optical-flow methods can be classified into differential, frequency-based and tensor-based techniques. Naive application often leads to difficulties; we will sketch some approaches to overcome them. The chapter closes with a literature review concerning the application of optical-flow based techniques to hydromechanical problems. This chapter is a shortened version of [Jehle et al., in preparation].
Chapter 5 describes the "heart" of the measurement technique presented in this thesis: the reconstruction of a fluid particle's 3D position by exploiting Beer-Lambert's law and illuminating with light of two wavelengths. The errors of this technique are investigated.
The hardware components are characterised in detail in chapter 6. A proper selection of tracer particles, dye and light emitting diodes is particularly essential for the measurement method presented here. As in all quantitative fluid visualisation techniques, optics and cameras have to be adjusted to the experiment.
We make extensive use of image processing techniques. Chapter 7 explains the various steps the raw image sequences have to pass through before the Eulerian or Lagrangian flow field, or the wall shear rate, can be extracted. The determination of both the three-component velocity vector field and the wall shear rate relies on the tensor-based methods introduced in chapter 4.
Experimental results are given in chapter 8. Two methods of calibrating the determination of a particle's depth are investigated. Measurements were conducted both in a falling film and in a convection tank. Moreover, our method of direct determination of the wall shear rate is applied to sequences recorded in the context of biofluidmechanics. For each of the experiments, a short introduction to the theory and a summary of the experimental setup is given.
2 Transport of Mass and Momentum at the Air-Water Interface
2.1 Kinematics of Fluids
In this section we will address some issues regarding the kinematics of fluids which will be relevant for our analysis. Kinematics is the branch of mechanics that deals with quantities involving space and time only. In contrast to dynamics, kinematics describes motion; it does not attempt to explain it (by forces, for instance).
2.1.1 Lagrangian and Eulerian Perspective
There are two alternative representations of fluid motion: the Eulerian description gives information about what happens at a fixed point in space, while the Lagrangian description follows an individual fluid particle on its way through space.
The Eulerian perspective
A scalar-, vector- or tensor-valued quantity F , like temperature, velocity or the diffusion tensor,
is measured (or calculated) at a fixed position x at a given time t: F (x, t).
The partial derivative

$$ \frac{\partial F(\mathbf{x}, t)}{\partial t} \qquad (2.1) $$

yields the local rate of change of the quantity at a point x and a time t; it does not give the total rate of change as seen by the fluid particle.
We define the quantity u = ∂x/∂t to be the Eulerian velocity field. The curves that are everywhere tangent to the Eulerian velocity vectors are called streamlines.
The Lagrangian perspective
Here an individual fluid particle is followed, which is located at the position x0 at the time t = 0.
Thus the value of an arbitrary quantity can be specified by F (x0 , t) for t > 0. The time t can be
considered as the parameter of a trajectory, which forms the pathline of the fluid particle.
We can calculate the total rate of change of the quantity along its way on the trajectory by using the chain rule:

$$ \frac{dF(\mathbf{x}, t)}{dt} = \frac{\partial F(\mathbf{x}, t)}{\partial t} + \frac{d\mathbf{x}}{dt} \cdot \nabla F. \qquad (2.2) $$
Literally speaking, this total rate of change (material derivative, substantial derivative, particle derivative) of a quantity F is composed of two parts:
• the local rate of change of F at a given point x, and
• the advective derivative of the quantity as a result of its motion, $\mathbf{u} \cdot \nabla F$, which is the product of the spatial gradient of the quantity ∇F and the velocity u = dx/dt of the fluid particle.
Thus a quantity, say the temperature, of a fluid particle can change because the whole temperature field is changing, or it can change because the particle is simply moving around (perhaps into a hotter region). In the latter case, Eq. (2.1) may be zero while Eq. (2.2) is different from zero, because both the gradient of the temperature and the fluid particle's velocity are non-zero.
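The following minimal sketch (an illustration, not part of the thesis) verifies Eq. (2.2) symbolically for an assumed scalar field F(x, t) advected with a constant velocity: differentiating F along a particle path reproduces the sum of the local and the advective rate of change.

```python
# Minimal symbolic check of the material derivative, Eq. (2.2); the field F and
# the constant velocity u are assumptions chosen only for illustration.
import sympy as sp

x, t, x0 = sp.symbols("x t x0")
u = 2.0                                   # assumed constant fluid velocity
F = sp.sin(x) * sp.exp(-t)                # assumed scalar field (e.g. temperature)

local = sp.diff(F, t)                     # local rate of change, Eq. (2.1)
advective = u * sp.diff(F, x)             # advective contribution u * dF/dx

# total rate of change following the particle x(t) = x0 + u*t
material = sp.diff(F.subs(x, x0 + u * t), t)

difference = sp.simplify(material - (local + advective).subs(x, x0 + u * t))
print(difference)                         # -> 0
```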
2.1.2 The Helmholtz Theorem
We can expand the velocity vector $\mathbf{u} = (u_1, u_2, u_3)^T$ of a fluid element, located at the position $\mathbf{x} = (x_1, x_2, x_3)$, into a Taylor series up to first order:

$$ u_i(x_j, t) \approx u_i(x_{j,0}) + \frac{\partial u_i}{\partial x_j} x_j + \frac{\partial u_i}{\partial t} t, \qquad i, j = 1 \ldots 3. $$
Here ui (xj , 0) is its translation, (γij ) = ∂ui /∂xj is known as the velocity gradient tensor, and
∂ui /∂t is its acceleration.
The velocity gradient tensor can be decomposed into a symmetrical $(s_{ij})$ and an antisymmetrical $(a_{ij})$ part:

$$ \gamma_{ij} = \frac{1}{2}(\gamma_{ij} + \gamma_{ji}) + \frac{1}{2}(\gamma_{ij} - \gamma_{ji}) \equiv s_{ij} + a_{ij}, $$
which reads in matrix notation:

$$ (\gamma_{ij}) = \begin{pmatrix} 0 & -a_{21} & a_{13} \\ a_{21} & 0 & -a_{32} \\ -a_{13} & a_{32} & 0 \end{pmatrix} + \begin{pmatrix} \partial u_1/\partial x_1 & s_{12} & s_{13} \\ s_{12} & \partial u_2/\partial x_2 & s_{23} \\ s_{13} & s_{23} & \partial u_3/\partial x_3 \end{pmatrix}. \qquad (2.3) $$
The different parts of (γij) can be identified with different types of affine transformations:
Rotation: The first matrix in Eq. (2.3) is a rotation matrix; its components can be identified with half of the rotation vector:

$$ \boldsymbol{\omega} = \nabla \times \mathbf{u} = (\partial/\partial x_1, \partial/\partial x_2, \partial/\partial x_3)^T \times \mathbf{u} \equiv 2(a_{32}, a_{13}, a_{21})^T. $$

Dilation: The trace of the second matrix in Eq. (2.3) represents the volume dilation ΘV, which is the divergence of the vector field:

$$ \Theta_V = \partial u_1/\partial x_1 + \partial u_2/\partial x_2 + \partial u_3/\partial x_3 = \nabla \cdot \mathbf{u}. $$

Shear: The off-diagonal elements of the second matrix in Eq. (2.3) are half of the shear strain rates, which are defined as the rates of decrease of the angle formed by two mutually perpendicular lines on the element.
We have shown that every infinitesimal motion of a fluid volume element can be decomposed into a translation, a rotation and a deformation, where the latter consists of a dilation and a shearing of the element. This is known as the Helmholtz theorem [Helmholtz, 1858].
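A small numerical sketch (illustrative, with an arbitrarily assumed velocity gradient tensor) makes the decomposition concrete: the tensor is split into its antisymmetric and symmetric parts, from which the rotation vector and the volume dilation are read off as described above.

```python
# Illustration of Eq. (2.3): decomposition of an assumed velocity gradient
# tensor into rotation (antisymmetric) and dilation + shear (symmetric) parts.
import numpy as np

gamma = np.array([[0.1, 0.3, 0.0],     # assumed gamma_ij = du_i / dx_j
                  [0.1, -0.2, 0.4],
                  [0.2, 0.0, 0.1]])

s = 0.5 * (gamma + gamma.T)            # symmetric part s_ij
a = 0.5 * (gamma - gamma.T)            # antisymmetric part a_ij

dilation = np.trace(s)                                   # Theta_V = div u
rotation = 2.0 * np.array([a[2, 1], a[0, 2], a[1, 0]])   # omega = 2(a32, a13, a21)
print("volume dilation:", dilation)
print("rotation vector:", rotation)
print("sum reproduces gamma:", np.allclose(gamma, s + a))
```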
2.2 Dynamics of Fluids
2.2.1 Forces in Fluid Dynamics
The forces acting on a fluid element can be divided into three classes: body forces, surface forces,
and line forces.
Body forces: Body forces result from the medium being placed in a force field; examples are the gravitational or the electric force. They have in common that they are proportional to the mass of the body they act on. An example is the gravitational force F:

$$ \mathbf{F} = m\mathbf{g} \quad \text{resp.} \quad \mathbf{f} = \rho\mathbf{g}, $$

with g being the gravitational acceleration. The introduction of a force density f, i.e. force per unit volume, is appropriate in continuum mechanics.
Surface forces: Surface forces are proportional to the area they act on. They can be resolved into components normal and tangential to the area. The i-component of the surface force per unit volume of an element is

$$ \partial \tau_{ij} / \partial x_j, $$

where $(\tau_{ij})$ is the stress tensor. The first index i of this symmetric tensor indicates the direction of the normal to the surface on which the stress is considered, and the second index j indicates the direction in which the stress acts. The diagonal elements (i = j) of the stress tensor are the normal stresses, and the off-diagonal elements are the tangential or shear stresses.
The relation between stress and deformation in a continuum is called a constitutive equation. In the case of a Newtonian, incompressible fluid, the stress is related to the pressure p and the strain rate $s_{ij} = \frac{1}{2}(\partial u_i/\partial x_j + \partial u_j/\partial x_i)$ according to

$$ \tau_{ij} = -p\,\delta_{ij} + 2\mu s_{ij}, \qquad (2.4) $$

where μ is the dynamic viscosity.
Line forces: Line forces are proportional to the extent of the line they act along. An example is the surface tension force. This kind of force does not appear directly in the equations of motion, but enters through the boundary conditions.
2.2.2 Basic Equations
All fluid mechanics is based on the conservation laws for mass, momentum and energy. In the
following, we will present these laws in differential form.
Conservation of mass - continuity equation
Conservation of mass is expressed by the continuity equation, which reads in its general form:

$$ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0. \qquad (2.5) $$

For incompressible fluids (i.e. for liquids and for gases at speeds considerably smaller than the speed of sound), we can regard ρ as constant, so that the continuity equation becomes:

$$ \nabla \cdot \mathbf{u} = 0. $$
Conservation of momentum - Navier-Stokes equation
Newton's law is applied to an infinitesimal fluid element:

$$ \rho \frac{d u_i}{dt} = \rho g_i + \frac{\partial \tau_{ij}}{\partial x_j}. $$
By inserting the constitutive equation Eq. (2.4), we obtain the Navier-Stokes equation, which reads in matrix-vector notation:

$$ \rho \frac{d\mathbf{u}}{dt} = \rho\left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = \rho\mathbf{g} - \nabla p + \mu \nabla^2 \mathbf{u}. \qquad (2.6) $$
We can rewrite the Navier-Stokes equation by introducing the non-dimensional quantities $\tilde{\mathbf{u}} = \mathbf{u}/V_0$, $\tilde{\mathbf{x}} = \mathbf{x}/L_0$, $\tilde{t} = t V_0/L_0$ and $\tilde{p} = p/(\rho V_0^2)$, using a typical velocity scale $V_0$ and a typical length scale $L_0$:

$$ \frac{d\tilde{\mathbf{u}}}{d\tilde{t}} = \frac{1}{\mathrm{Fr}^2}\,\mathbf{g}/g - \tilde{\nabla}\tilde{p} + \frac{1}{\mathrm{Re}}\tilde{\nabla}^2\tilde{\mathbf{u}}, $$

where we have introduced the Reynolds number $\mathrm{Re} = V_0 L_0 \nu^{-1}$, which characterises the relative strength of inertial and viscous forces, and the Froude number $\mathrm{Fr} = V_0 (L_0 g)^{-1/2}$, which quantifies the relative importance of inertia and gravity forces.
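To give a feeling for the magnitudes involved, the following sketch computes Re and Fr for assumed, typical near-surface scales; the chosen values of V0 and L0 are illustrative assumptions only.

```python
# Illustrative only: Reynolds and Froude numbers for assumed scales of a
# wind-driven near-surface water flow.

nu = 1.0e-6     # kinematic viscosity of water in m^2/s
g = 9.81        # gravitational acceleration in m/s^2
V0 = 0.1        # assumed typical velocity scale in m/s
L0 = 0.01       # assumed typical length scale in m

Re = V0 * L0 / nu              # relative strength of inertial and viscous forces
Fr = V0 / (L0 * g) ** 0.5      # relative importance of inertia and gravity
print(f"Re = {Re:.0f}, Fr = {Fr:.2f}")
```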
In the following, two limits of the Navier-Stokes equation will be discussed:
Viscous flow (Re ≪ 1): The inertia term in Eq. (2.6) is negligible compared to the viscous term. This results in the Stokes equation:

$$ \rho \frac{\partial \mathbf{u}}{\partial t} = \rho\mathbf{g} - \nabla p + \mu \nabla^2 \mathbf{u}. $$

Inviscid flow (Re ≫ 1): The viscous term can be neglected in Eq. (2.6). This results in the Euler equation:

$$ \rho \frac{d\mathbf{u}}{dt} = \rho\mathbf{g} - \nabla p. $$

We can derive a mechanical energy equation directly from the momentum equation (see [Kundu, 1990]).
Conservation of thermal energy - Heat equation
Under several restrictions, including that the flow speeds are small compared to the speed of sound and that the temperature differences in the flow are small, the heat equation becomes

$$ \frac{dT}{dt} = D_H \nabla^2 T, $$

with $D_H$ being the thermal diffusivity.
2.3 Transport in Fluids
In this section we will consider the transport of any substantial addition in fluids. Such an addition can be mass (for example a trace gas like carbon dioxide), momentum or heat. In the
following formulas we will denote the addition by the symbol c, which represents the concentration, the momentum (here c is vector-valued) or the heat.
2.3.1 Molecular Transport: Diffusion
Diffusion in gases can be considered a result of the Brownian motion of molecules. Mathematically, stationary diffusion is described by Fick's first law:

$$ \mathbf{j} = -D \nabla c, \qquad (2.7) $$

where j is the flux density and D is the diffusion constant. In order to fulfil stationarity it is assumed that the concentration of mass, heat or momentum in the reservoirs is sufficiently large, so that it stays constant over the regarded time interval.
For unsteady processes, we have to take care that the continuity equation $\partial c/\partial t + \nabla \cdot \mathbf{j} = 0$ is satisfied. Combining this with Eq. (2.7), we arrive at Fick's second law:

$$ \frac{\partial c}{\partial t} = D \nabla^2 c. \qquad (2.8) $$
Here we assumed homogeneous, isotropic diffusion. In the general case D is a tensor of second order, so that Equation (2.8) has to be rewritten to:

$$ \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c). $$
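As a minimal illustration of Fick's second law, the sketch below integrates Eq. (2.8) in one dimension with an explicit finite-difference scheme; the diffusion constant, grid and initial condition are assumptions chosen only for demonstration.

```python
# Illustrative 1D integration of dc/dt = D d^2c/dz^2 (Eq. 2.8) with an
# explicit scheme; all parameters are assumed demonstration values.
import numpy as np

D = 2.0e-9                  # assumed diffusion constant in m^2/s
dz = 1.0e-5                 # grid spacing in m
dt = 0.2 * dz**2 / D        # time step below the explicit stability limit
c = np.zeros(200)
c[100] = 1.0                # initial concentration spike

for _ in range(500):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dz**2   # second derivative
    c += dt * D * lap                                       # boundaries held at zero

print("peak concentration after diffusion:", float(c.max()))
```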
Until now we did not specify the type of tracer represented by the symbol c:
• Mass transport can be described by the flux density

$$ \mathbf{j}_m = -D \nabla c, $$

where c is the concentration of the material inside the fluid, which can be specified in mol/l for instance, and D is the diffusion constant (with units m² s⁻¹), a specific property of the considered material.
• Heat transport can be described by the flux density

$$ \mathbf{j}_H = -D_H \nabla(\rho c_P T) = -\lambda \nabla T, $$

where the concentration is replaced by $\rho c_P T$. If we regard the density ρ and the specific heat $c_P$ as independent of position, we see that the flux density is proportional to the temperature gradient. The proportionality constant is the diffusion constant for heat, which is related to the coefficient of thermal conductivity λ via $D_H = \lambda(\rho c_P)^{-1}$. Again, this diffusion constant has units m² s⁻¹.
• Momentum transport can be described by the flux density

$$ \mathbf{j}_p = -\nu \nabla \mathbf{u}, $$

where the flux density is related to the shear stress τ via $\mathbf{j} = \boldsymbol{\tau}/\rho$ and the diffusion constant for momentum is the kinematic viscosity ν. Thus, for incompressible media (with constant ρ), the shear stress is proportional to the velocity gradient. The kinematic viscosity, too, has units m² s⁻¹.
2.3.2 Turbulent Transport
Now we consider that tracers can be transported both by diffusion and by convection. In order to do so, we have to regard the total differential of the concentration change (here again "concentration" stands for mass, heat or momentum):

$$ \frac{dc}{dt} = \frac{\partial c}{\partial t} + \mathbf{u} \cdot \nabla c = D \nabla^2 c. \qquad (2.9) $$

Fick's second law, Equation (2.8), becomes the general transport equation, which is linear in c if the diffusion constant D is not a function of concentration. The only unknown in Equation (2.9) is the velocity vector field u, which can be obtained by solving the Navier-Stokes equation Eq. (2.6).
Reynolds decomposition
From now on we move over to turbulent flows, which are the norm rather than the exception in environmental fluid dynamics (consider the atmosphere or the ocean, for instance). We perform the Reynolds decomposition of the velocity and concentration fields into mean (uppercase letters) and fluctuating (primed lowercase letters) quantities:

$$ \mathbf{u} = \mathbf{U} + \mathbf{u}' \quad \text{and} \quad c = C + c'. $$

We average the whole transport equation Eq. (2.9),

$$ \partial/\partial t\,(C + c') + (\mathbf{U} + \mathbf{u}') \cdot \nabla (C + c') = D \nabla^2 (C + c'), $$

which becomes (using Einstein's summation convention)

$$ \frac{\partial C}{\partial t} + \frac{\partial}{\partial x_i}\left( C U_i + \overline{c' u_i'} \right) = D \frac{\partial^2}{\partial x_i^2} C, $$

where we introduced the Reynolds stress $\overline{c' u_i'}$, which is essentially the covariance between c′ and u′i. In the case of a simple parallel shear flow with positive mean velocity and concentration gradients, ∂U(z)/∂z > 0 and ∂C(z)/∂z > 0, the velocity fluctuations tend to balance the flow in such a way that the concentration gradient is flattened, i.e. we have $\overline{c'w'} < 0$ in this case.
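The Reynolds stress is simply the covariance of the fluctuating quantities, which the following sketch estimates from synthetic, anti-correlated time series; the signal model and all numbers are assumptions for illustration.

```python
# Illustration: the Reynolds stress c'w' as the covariance of fluctuations,
# estimated from synthetic (assumed) time series.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
w_fluc = rng.normal(0.0, 0.01, n)                     # vertical velocity fluctuations
c_fluc = -0.5 * w_fluc + rng.normal(0.0, 0.005, n)    # anti-correlated concentration

reynolds_stress = np.mean(c_fluc * w_fluc)            # estimate of mean(c'w')
print(f"c'w' = {reynolds_stress:.2e} (negative: down-gradient transport)")
```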
1D-case
We will consider the 1D case of a homogeneous flow U = U(z), with a mean velocity field (U, 0, 0) and a fluctuating velocity field (u′, v′, w′). From the assumption of homogeneity it follows that $\overline{c'u'} = \overline{c'v'} = 0$, so that the Reynolds-averaged transport equation becomes:

$$ \frac{\partial C}{\partial t} + \frac{\partial}{\partial x_i}(C U_i) + \frac{\partial}{\partial z}\overline{c'w'} = D \frac{\partial^2}{\partial z^2} C. $$

Because of homogeneity we can set $\partial/\partial x_i\,(C U_i)$ to zero, so that we arrive at

$$ \frac{\partial C}{\partial t} = \frac{\partial}{\partial z}\left( D\frac{\partial C}{\partial z} - \overline{c'w'} \right) \qquad (2.10) $$

for concentration (in general) and

$$ \frac{\partial U}{\partial t} = \frac{\partial}{\partial z}\left( \nu\frac{\partial U}{\partial z} - \overline{u'w'} \right) \qquad (2.11) $$

for momentum (in particular). The latter is known as the Reynolds equation in the literature.
Linear-logarithmic velocity profile
Now we will consider a stationary 1D solution. The transport term in Eq. (2.11) becomes a constant flux density for momentum:

$$ j_p = \nu\frac{\partial U}{\partial z} - \overline{u'w'} = \frac{\partial U}{\partial z}(\nu + K_t) = j_{\mathrm{diff}} + j_{\mathrm{turb}}. \qquad (2.12) $$
Here the first term, $\nu\,\partial U/\partial z$, is due to the molecular transport and may be described using a (molecular) diffusive flux density, while the second term, $\overline{u'w'}$, is due to turbulence and may be described using a turbulent flux density, which can be expressed using a turbulent diffusion constant for momentum $K_t$.
Two assumptions have to be fulfilled for the turbulent diffusion constant approach:
1. the size of the eddies has to be small compared to the dimensions of the system, and
2. turbulence has to be homogeneous and isotropic.
Clearly the first assumption is not fulfilled, because the eddy size (which is a measure of the mean free path in turbulence) can reach the dimensions of the system. The big question then is: which eddies are relevant to the process, large ones or small ones?
To find an expression for the velocity profile U(z), we rearrange Equation (2.12) and integrate:

$$ \frac{\partial U}{\partial z} = -\frac{j_p}{\rho(\nu + K_t(z))} \;\Rightarrow\; U(z) = -\frac{\tau}{\rho}\int_0^z \frac{dz'}{\nu + K_t(z')}, \quad \text{where } U(0) = 0. \qquad (2.13) $$
Based on intuition we have (unlike the case of molecular diffusion) a clear spatial dependency of $K_t$. Using the definition of the friction velocity, $\tau = \rho u_*^2$, we can split the integral into two parts: one outside the boundary layer (where $K_t \gg \nu$) and one inside the boundary layer (where $\nu \gg K_t$). The integral outside the boundary layer reads:

$$ U(z_1) - U(z_0) = u_*^2 \int_{z_0}^{z_1} \frac{dz'}{K_t(z')}. \qquad (2.14) $$
A guess for the spatial dependency of the turbulent diffusion constant outside of the viscous boundary layer would be

$$ K_t(z) = \kappa u_* z, \qquad (2.15) $$

with the von Karman constant κ ≈ 0.41. Here the turbulent diffusion constant is directly proportional to the distance z to the wall. By plugging this into Equation (2.14) we get

$$ U(z_1) - U(z_0) = \frac{u_*}{\kappa}\int_{z_0}^{z_1} \frac{dz'}{z'} = \frac{u_*}{\kappa}(\ln z_1 - \ln z_0). $$
This is the logarithmic profile, which can be made nondimensional in the dependent and independent variables by applying the transformations

$$ u_+ \leftarrow U(z)/u_* \quad \text{and} \quad z_+ \leftarrow u_* z/\nu, $$

so that we arrive at

$$ u_+ = (\ln z_+)/\kappa + \mathrm{const}. \qquad (2.16) $$
To obtain an expression for the profile near the wall, we use the definitions of the shear stress (near the wall) and of the friction velocity:

$$ \left[\frac{\partial U}{\partial z}\right]_{z=0} = \frac{\tau}{\rho\nu} = \frac{u_*^2}{\nu} \;\Rightarrow\; U = \frac{u_*^2 z}{\nu} \;\Rightarrow\; u_+ = z_+. \qquad (2.17) $$

We see that the profile in the viscous sublayer (where the viscosity is large compared to the turbulent diffusion) becomes linear. The universal law of the wall is plotted in Fig. 2.1.
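The following sketch evaluates the two branches of the universal law of the wall, Eqs. (2.16) and (2.17); the integration constant 5.0 and κ = 0.41 are the commonly quoted values and are taken here as assumptions.

```python
# Illustration of the law of the wall: linear branch u+ = z+ in the viscous
# sublayer, logarithmic branch u+ = ln(z+)/kappa + const outside of it.
import numpy as np

kappa, const = 0.41, 5.0       # assumed standard values

def u_plus(z_plus):
    z_plus = np.asarray(z_plus, dtype=float)
    return np.where(z_plus < 11.0, z_plus, np.log(z_plus) / kappa + const)

for zp in (1.0, 5.0, 11.0, 30.0, 100.0):
    print(f"z+ = {zp:6.1f}  ->  u+ = {float(u_plus(zp)):.2f}")
```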
Implications on gas transfer
Until now we have addressed the transport of momentum. What are the implications for the transport of mass, for example? We have (similar to Equation (2.13)) an expression for the mean mass concentration C depending on the wall distance z:

$$ C(z) = j \int_0^z \frac{dz'}{D + K_c(z')} + C_0, \qquad (2.18) $$

where we used the turbulent diffusion constant for concentration, $K_c$, which depends on z, too. Here we introduce the transfer velocity $k \equiv j/\Delta c$, which is known as the "piston velocity": visually, k specifies the velocity with which concentration is pushed through the boundary layer into the underlying or overlying bulk region.
2.4 Transport Models
2.4.1 Film Models
Film models (e.g. [Liss and Slater, 1974]) are based on the assumption that there is no turbulence in the viscous boundary layer. Outside of the boundary layer, by contrast, mixing is strong, so that there is no concentration gradient.
We can define a typical thickness $z_*$ for the boundary layer by intersecting the linear velocity profile Eq. (2.17) and the logarithmic velocity profile Eq. (2.16). We find $z_+ \approx 11$, i.e. $z_* \approx 11\nu/u_*$. By plugging this into Fick's first law Eq. (2.7), we get an expression for the mass flux density, which depends on the concentration difference and on the boundary layer thickness:

$$ j_m = -D\frac{\partial c}{\partial z} = -D\frac{\Delta c}{z_*}. $$
In order to get an expression for the transfer velocity, we divide the flux density by the concentration difference:

$$ k = \frac{j_m}{\Delta c} = \frac{D}{z_*} = \frac{D u_*}{11\nu} = \frac{1}{11}\,\mathrm{Sc}^{-1} u_*. \qquad (2.19) $$
Figure 2.1: Universal law of the wall. The non-dimensional velocity $u_+ = U/u_*$ is plotted semilogarithmically against the non-dimensional wall distance $z_+ = u_* z/\nu$. For small distances ($z_+ < O(5)$) the dependence is linear, for larger distances ($z_+ > O(30)$) it is logarithmic. From [Kundu, 1990].
Recapitulating: if the concentration profile is known, then the boundary layer thickness is known, and thus the transfer velocity is known (given the diffusion constant).
In the last step of Eq. (2.19) we used the definition of the nondimensional Schmidt number for gases, which is the viscosity of the fluid divided by the molecular diffusion constant: Sc = ν/D. For most gases, Sc is in the range of 200 to 3000. Experiments show that the film model is quite unrealistic, because the measured transfer velocities exceed the theoretically derived values by orders of magnitude.
2.4.2 Surface Renewal Models
A completely different approach is taken by the surface renewal models (see [Danckwerts, 1951; Higbie, 1935]). Here the basic idea is that large eddies rebuild the boundary layer in a statistical sense. We can incorporate this by assuming the divergence of the Reynolds stress for concentrations, $\partial/\partial z\,(\overline{c'w'})$ in Eq. (2.10), to be directly proportional to the concentration itself. Dimensional reasoning shows that the proportionality constant has to be an inverse time, which is called the renewal rate $\tau^{-1}$. The stationary transport equation then reads

$$ D\frac{\partial^2 c}{\partial z^2} - \tau^{-1} c = 0, \qquad (2.20) $$

which is a second order ordinary differential equation and can be solved by the ansatz $c = c_0 \exp(-z/z_*)$. For the boundary layer thickness we get $z_* = \sqrt{D\tau}$.
Figure 2.2: Schematic illustration of boundary layers at the water surface. Left: Because the solubility of a gas in water is not equal to unity, there exists a discontinuity of the concentration profile. Right: The velocity profile is continuous. The thickness of the molecular boundary layer is O(100 µm), the thickness of the viscous boundary layer is O(1 mm) (from [Degreif, 2006]).
Because τ can only depend on the viscosity and on the friction velocity, we have by dimensional reasoning:

$$ \tau(u_*, \nu) = \beta^2 \frac{\nu}{u_*^2}, $$

where we have introduced the proportionality constant $\beta^2$. We can plug this into the nondimensional form of Eq. (2.20):

$$ D\frac{\partial^2 c}{\partial z_+^2}\frac{u_*^2}{\nu^2} - \frac{c\,u_*^2}{\beta^2\nu} = 0 \;\Rightarrow\; \frac{\partial^2 c}{\partial z_+^2} - \frac{\mathrm{Sc}}{\beta^2}\,c = 0, $$

where we have used the nondimensional wall distance $z_+ = z u_*/\nu$. By fitting the solution of this equation, $c = c_0 \exp(-\sqrt{\mathrm{Sc}}\,z_+/\beta)$, to the universal velocity profile, we get β ≈ 16, so that we can determine an expression for the transfer velocity:
$$ k = \frac{D}{z_*} = \sqrt{\frac{D}{\tau}} = \frac{u_*}{\beta}\sqrt{\frac{D}{\nu}} = \frac{1}{16}\,\mathrm{Sc}^{-1/2} u_*. $$
Considering Schmidt numbers of order 1000, we see that the surface renewal model yields much higher transfer velocities than the film model, because the Schmidt number exponent is only 1/2, in contrast to the film model, where it is unity.
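To make the difference between the two models tangible, the sketch below compares the transfer velocities predicted by the film model, Eq. (2.19), and by the surface renewal model over the quoted Schmidt number range; the friction velocity is an assumed value.

```python
# Illustrative comparison of film model (k ~ Sc^-1) and surface renewal model
# (k ~ Sc^-1/2) transfer velocities for an assumed friction velocity.

u_star = 0.005     # assumed water-side friction velocity in m/s

def k_film(Sc):        # Eq. (2.19): k = (1/11) Sc^-1 u_star
    return u_star / (11.0 * Sc)

def k_renewal(Sc):     # surface renewal: k = (1/16) Sc^-1/2 u_star
    return u_star / (16.0 * Sc**0.5)

for Sc in (200.0, 600.0, 3000.0):
    print(f"Sc = {Sc:6.0f}:  film {k_film(Sc) * 3.6e5:6.3f} cm/h,"
          f"  renewal {k_renewal(Sc) * 3.6e5:6.3f} cm/h")
```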
2.4.3 Turbulent Diffusion Models
In contrast to the previous models, where heuristic assumptions were made, turbulent diffusion models and surface divergence models have a firmer hydrodynamical foundation. Here we will sketch the derivation of [Coantic, 1986].
The starting point of the turbulent diffusion models is the solution of the 1D stationary Reynolds-averaged transport equation, Eq. (2.18). To integrate this equation, we have to find an expression for the turbulent diffusion constant containing z. The turbulent diffusion constant is by definition (Eq. (2.12)) proportional to the Reynolds stress, $K_t \propto \overline{u'w'}$, where the proportionality constant is $-(\partial U/\partial z)^{-1}$.
Now we expand the Reynolds stress into a Taylor series for small distances z from the surface:

$$ \overline{u'w'} \approx 0 + \left[\frac{\partial}{\partial z}\overline{u'w'}\right]_{z=0} z + \left[\frac{\partial^2}{\partial z^2}\overline{u'w'}\right]_{z=0} \frac{z^2}{2}. $$
For the first derivative of the Reynolds stress at the surface we have

$$ \left[\frac{\partial}{\partial z}\overline{u'w'}\right]_{z=0} = \overline{\frac{\partial u'}{\partial z}w'} + \overline{u'\frac{\partial w'}{\partial z}}. $$
Because w′ = 0 at the surface, the first term vanishes. The second term vanishes at a rigid surface, too, because there u′ is zero as well. At a free surface we have u′ ≠ 0 and w′ ≠ 0 in general, but we may have ∂w′/∂z = 0; this is exactly the case if the 2D continuity equation ∂u′/∂x + ∂v′/∂y = 0 holds. If we calculate the second derivative of the Reynolds stress,

$$ \left[\frac{\partial^2}{\partial z^2}\overline{u'w'}\right]_{z=0} = \overline{\frac{\partial u'}{\partial z}\frac{\partial w'}{\partial z}} + \overline{u'\frac{\partial^2 w'}{\partial z^2}}, $$

we find that both the first and the second term vanish in the rigid-surface case (because there ∂w′/∂z = u′ = 0), but this is certainly not so in the free-surface case, because there neither u′ nor ∂²w′/∂z² is zero.
Recapitulating: in the rigid-surface case the second derivative of the Reynolds stress has to vanish, so that we have for the turbulent diffusion constant

$$ K_t(z) = \alpha_3 z^3 + O(z^4) \quad \text{for a rigid surface.} $$

In the free-surface case the second derivative does not disappear, so we can apply

$$ K_t(z) = \alpha_2 z^2 + O(z^3) \quad \text{for a free surface.} $$
Because the resistance for gas transfer resides mainly in the boundary layer, we let z go to infinity, so that the transfer velocity becomes

$$ k = \frac{j_m}{C - C_0} = \left( \int_0^\infty \frac{dz'}{D + \alpha_n z'^p} \right)^{-1}, $$

where we plugged in the expressions for the turbulent diffusion constants for the free-surface case (p = 2) and the solid-surface case (p = 3). After nondimensionalising and some simple manipulations, we get

$$ k = \frac{u_*}{\mathrm{Sc}} \left( \int_0^\infty \frac{dz_+'}{1 + \alpha_n z_+'^p / D} \right)^{-1}, $$
which can be integrated to

$$ k = \beta_p^{-1}\,\mathrm{Sc}^{\frac{1}{p}-1}\, u_*, $$

which holds exactly for Sc → ∞. In particular we find

$$ k = \beta_2^{-1}\,\mathrm{Sc}^{-1/2}\, u_* \quad \text{for a free surface } (p = 2), \qquad k = \beta_3^{-1}\,\mathrm{Sc}^{-2/3}\, u_* \quad \text{for a solid surface } (p = 3). $$

The free-surface case is reproduced by the surface renewal model, $k = \frac{1}{16}\,\mathrm{Sc}^{-1/2} u_*$, whereas the Schmidt number exponent predicted by the film model, $k = \frac{1}{11}\,\mathrm{Sc}^{-1} u_*$, is even too low to match the turbulent diffusion model for a solid surface.
[Deacon, 1977] integrated Eq. (2.18) numerically using an analytical expression for Kt (z+ )
(given by [Reichardt, 1951]) to obtain more exact expressions for the transfer velocity, which
depend on the magnitude of the Schmidt number.
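A quick numerical check (illustrative; the constant α_n and its grouping with D into a single assumed coefficient are not from the thesis) confirms that the integral above indeed yields the Schmidt number exponents 1/p − 1 for p = 2 and p = 3.

```python
# Numerical check that k+ = (1/Sc) * (integral dz+ / (1 + a*Sc*z+^p))^-1
# scales as Sc^(1/p - 1); the coefficient a is an assumed constant.
import numpy as np
from scipy.integrate import quad

def k_plus(Sc, p, a=1.0e-3):
    integral, _ = quad(lambda zp: 1.0 / (1.0 + a * Sc * zp**p), 0.0, np.inf)
    return 1.0 / (Sc * integral)

for p in (2, 3):
    slope = (np.log(k_plus(3000.0, p)) - np.log(k_plus(300.0, p))) / np.log(10.0)
    print(f"p = {p}: fitted Schmidt exponent {slope:+.2f} (expected {1.0/p - 1.0:+.2f})")
```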
2.4.4 Surface Divergence Models
Surface divergence models relate hydrodynamical quantities (i.e. the surface divergence) to the boundary layer thickness and to the transfer velocity. Their attraction originates from the fact that surface divergences can be measured directly or can be related to phenomena like microscale wave breaking.
From the continuity equation it follows at the surface:
$$ w = -z\left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right), $$
where we have assumed u and v to be constant in the thin mass boundary layer. Where the horizontal flow is divergent, the vertical velocity is upward, so that the mass boundary layer is squeezed by advection. In regions of surface convergence, the vertical velocity is downward, and the mass boundary layer is pulled down. Thus the thickness of the mass boundary layer, and thereby the transfer velocity, is controlled by the surface divergences.
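The surface divergence itself is straightforward to evaluate from a measured (or, as here, synthetic and assumed) horizontal surface velocity field, as the following sketch shows.

```python
# Illustration: surface divergence du/dx + dv/dy of an assumed synthetic
# horizontal surface velocity field.
import numpy as np

x = np.linspace(0.0, 0.1, 101)               # 10 cm x 10 cm domain (assumed)
y = np.linspace(0.0, 0.1, 101)
X, Y = np.meshgrid(x, y, indexing="ij")

a = 2.0                                      # assumed divergence strength in 1/s
u = a * (X - 0.05)                           # synthetic surface velocities
v = -0.5 * a * (Y - 0.05)

dudx = np.gradient(u, x, axis=0)
dvdy = np.gradient(v, y, axis=1)
surface_divergence = dudx + dvdy
print("mean surface divergence:", surface_divergence.mean())   # ~ a/2 here
```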
Free surface stagnation point flow
In the following we will discuss the model for a simple free-surface stagnation point flow, which can be described by

$$ u = ax, \qquad w = -az, $$

where a is the divergence of the horizontal velocity field. The stationary transport equation (Eq. (2.9)) reads (for simplicity, we consider only one horizontal dimension)

$$ u\frac{\partial c}{\partial x} + w\frac{\partial c}{\partial z} = D\left( \frac{\partial^2 c}{\partial x^2} + \frac{\partial^2 c}{\partial z^2} \right). $$

There is no horizontal concentration gradient at the surface, ∂c/∂x = 0, and the horizontal diffusion is negligible, ∂²c/∂x² = 0, so that the concentration profile is determined by

$$ c(z) = c_0\,\mathrm{erfc}\!\left( z\sqrt{\frac{a}{2D}} \right). $$
The transfer velocity at the surface is given by

$$ k = \frac{j_0}{c_0} = \frac{D\left[\dfrac{\partial c}{\partial z}\right]_{z=0}}{c_0} = \sqrt{\frac{2aD}{\pi}} = 0.623\, u_*\, \frac{\mathrm{Sc}^{-1/2}}{\sqrt{\mathrm{Re}}}, $$

with the Reynolds number $\mathrm{Re} = u_*^2/(a\nu)$.
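For orientation, the following sketch evaluates the erfc concentration profile and the resulting transfer velocity for assumed values of the diffusivity and the surface divergence a; the numbers are illustrative, not measured.

```python
# Illustration of the free-surface stagnation point solution:
# c(z) = c0 * erfc(z * sqrt(a / (2 D))) and k = sqrt(2 a D / pi), with assumed a, D.
import numpy as np
from scipy.special import erfc

D = 2.0e-9      # assumed diffusivity in m^2/s
a = 0.5         # assumed surface divergence in 1/s
c0 = 1.0        # surface concentration (arbitrary units)

for z in (0.0, 10e-6, 30e-6, 100e-6):             # depths in m
    c = c0 * erfc(z * np.sqrt(a / (2.0 * D)))
    print(f"z = {z * 1e6:6.1f} um  ->  c/c0 = {c:.3f}")

k = np.sqrt(2.0 * a * D / np.pi)
print(f"k = {k * 3.6e5:.2f} cm/h")
```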
Rigid surface stagnation point flow
With the boundary condition of a rigid surface, the 2D flow field becomes

$$ u = \frac{\partial u}{\partial x} = \frac{\partial w}{\partial z} = 0, \qquad w = w_{BL}\,\Phi(\eta), $$

with $w_{BL} = \sqrt{\nu a}$. Here Φ(η) is a function describing the "Hiemenz flow", the flow field around a stagnation point in the rigid-surface case [Schlichting, 1960]. In the case of large Schmidt numbers, we can solve the transport equation, yielding the transfer velocity at the surface:

$$ k = 0.66\, u_*\, \frac{\mathrm{Sc}^{-2/3}}{\sqrt{\mathrm{Re}}}. $$
Evidently, these very idealised assumptions lead to the correct Schmidt number exponents. [Csanady, 1990] elaborated an extension to this model, applying results of measurements in short wind waves (see section 2.5.2).
2.5 Microscale Wave Breaking and Langmuir Circulations
Until now we have only considered extremely simplified models of the transport of mass and momentum through a free surface. The major difficulty is the presence of a free-surface boundary condition. In interaction with the wind stress this can lead to the formation of shear flows, (breaking) waves and Langmuir circulations. All of these nonlinear phenomena generate turbulence, which is responsible for creating surface divergences, and thus for thinning or renewing the mass boundary layer, which in turn affects the resistance to gas transfer. While the surface of the waves is relatively easy to measure (see for example [Balschbach et al., 2001; Jähne et al., 2005]), it is considerably more difficult to obtain quantitative information about their internal flow field. In this section we will recapitulate some facts about linear and nonlinear wave theory, take a closer look at microscale wave breaking and Langmuir circulations, and tour historical and current attempts to gain knowledge about the near-surface flow. Regarding the dynamics of surface waves we suggest reading [Phillips, 1969]; for an overview of the basic mechanisms of the interaction of atmosphere and ocean see [Kraus and Businger, 1994].
2.5.1 Definition and Theory
Linearised wave theory
The linearised wave theory assumes a sine-shaped boundary condition at the air-water interface:
η = a cos(kx − ωt),
with a functional relation between the wave frequency ω and the wave number k, called the dispersion relation, which reads

$$ \omega = \sqrt{g k \tanh(kH)}, $$
depending on the water depth H. In its deep-water approximation, the linearised wave theory predicts circular particle orbits whose size decays exponentially with increasing distance from the surface. The applicability of linear wave theory requires a ≪ H and ka ≪ 1. The latter implies that the slope of the sea surface is small.
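A short sketch (with assumed wave parameters) evaluates the dispersion relation and the exponential decay of the orbital radii with depth predicted by linear theory.

```python
# Illustration of linear wave theory: omega = sqrt(g k tanh(k H)) and the
# exponential depth decay of the orbital radii (assumed wave parameters).
import numpy as np

g = 9.81
wavelength = 0.05          # assumed 5 cm wavelet
H = 1.0                    # assumed water depth in m
a = 0.001                  # assumed amplitude of 1 mm

k = 2.0 * np.pi / wavelength
omega = np.sqrt(g * k * np.tanh(k * H))
print(f"omega = {omega:.1f} rad/s, phase speed = {omega / k:.2f} m/s")

for z in (0.0, -0.005, -0.02):      # depth below the surface in m
    print(f"orbital radius at z = {z * 100:5.1f} cm: {a * np.exp(k * z) * 1000:.2f} mm")
```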
Stokes theory
In a dispersive system, such as deep water, an accumulation of linear waves is prevented, because the different Fourier components propagate at different speeds and separate from each other. Nonlinear steepening can cancel out the dispersive spreading, resulting in finite-amplitude waves of constant form.
[Stokes, 1847] showed that periodic waves of finite amplitude are possible in deep water. The surface elevation of irrotational waves in deep water is given by a power series in the small parameter ak, the wave steepness:

$$ \eta = a\left( \cos k(x - ct) + \frac{1}{2}ka\cos 2k(x - ct) + \frac{3}{8}(ka)^2\cos 3k(x - ct) + \ldots \right), $$

with the dispersion relation

$$ \omega = \sqrt{g k\left(1 + (ka)^2\right)}. $$
Unlike in the linear approximation, in the Stokes solution the fluid elements do not move in exact circles but in orbits whose average motion drifts slowly in the direction of the waves. This is due to the fact that the particles move faster forward (when they are at the top of the trajectory) than backward (when they are at the bottom of the orbit). This Stokes drift is

$$ u_S = a^2 \omega k \exp(2kz_0), $$

where $z_0$ is the particle's initial vertical position.
It turns out that these waves become unstable if the maximum amplitude exceeds 0.07 times the wavelength, so that the crest forms a sharp 120° angle. This is the classical Stokes criterion.
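The magnitude of the Stokes drift and the steepness criterion are easy to evaluate, as the following sketch with assumed wave parameters shows.

```python
# Illustration of the Stokes drift u_S = a^2 * omega * k * exp(2 k z0) and of
# the steepness criterion, for assumed wave parameters.
import numpy as np

g = 9.81
a = 0.002                      # assumed amplitude in m
wavelength = 0.10              # assumed wavelength in m
k = 2.0 * np.pi / wavelength
omega = np.sqrt(g * k * (1.0 + (k * a) ** 2))      # Stokes dispersion relation

for z0 in (0.0, -0.01, -0.03):
    u_s = a**2 * omega * k * np.exp(2.0 * k * z0)
    print(f"z0 = {z0 * 100:5.1f} cm:  u_S = {u_s * 100:.2f} cm/s")

print("steepness a/lambda =", a / wavelength, "(Stokes limit ~ 0.07)")
```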
Wave breaking
There is no generally accepted definition of wave breaking. One way to define this phenomenon would be to demand air entrainment as a necessary concomitant of wave breaking [Cokelet, 1977]. The resulting characteristic "whitecaps" are easily detectable. Due to the accompanying bubble formation, the rate of gas transfer is further increased.
Air entrainment and bubble formation may not be the only criteria for wave breaking; several researchers have found characteristic changes of the internal flow structure of breaking waves, like the break-down of the subsurface flow and the generation of turbulence with enhanced diffusion and dissipation, possibly accompanied by flow separation. [Banner and Phillips, 1974] define breaking waves as those in which certain fluid elements at the free surface (near the wave crest) move forward at a speed greater than the propagation speed of the wave profile as a whole.
In deep water the fluid elements slide down the leading slope. This results in air entrainment at large scales of breaking, but not necessarily at smaller scales. Thus the term microscale wave breaking was coined to distinguish this behaviour from the bubble-producing mechanism. One characteristic of such a breaking wave is the existence of a stagnation point in the mean flow, because the mean flow reverses near the crest.
Parasitic capillaries
Moderately short gravity waves exhibit "parasitic capillaries" on their leeward face. These capillary waves have the same phase speed as the gravity waves, which becomes possible due to the different dispersion relations. [Longuet-Higgins, 1992] conjectures that the parasitic capillaries are the source of vorticity, because their orbital frequency is much greater than that of the gravity waves. Thus the mean vorticity, which is a measure of turbulence, depends both on the wave slope ak of the gravity waves and on the orbital frequency of the capillaries. The two kinds of waves constitute a symbiotic system, a "capillary roller": the energy of the gravity waves is pumped into the capillary waves, which would otherwise be damped away. Together with microscale wave breaking due to wind-induced shear stress, the vorticity production by parasitic capillaries may be the major reason for the rapid increase of the gas transfer rate once waves have formed (i.e. the transition of the Schmidt number exponent from -2/3 to -1/2, which was reported by [Jähne et al., 1987]).
Small-scale Langmuir circulations
Langmuir circulations are streamwise vortices near the surface accompanied by regions of surface convergence and divergence (Fig. 2.3). [Melville et al., 1998] presented laboratory measurements of the generation and evolution of micro-Langmuir circulations as an instability of a wind-driven surface shear layer, using the 2D PIV technique. [Veron and Melville, 2001] found, using the controlled flux technique, that the heat and gas transfer velocities are increased by a factor of 1.7 to 2 by the small-scale Langmuir circulations. As opposed to micro-breakers, which renew only a fraction of the surface layer, these circulations rapidly renew the whole surface. Observations show that the length and time scales associated with the generation of surface waves and micro-Langmuir circulations are comparable. This suggests a clear coupling of the Langmuir cells with the surface waves and the subsequent cross-wind 3D modulation of the wave field. According to the results of the experiments and field measurements of [Veron and Melville, 2001], even the early stages of wind-wave evolution cannot be completely understood without the inclusion of the small-scale Langmuir circulations and wave-current interactions.
2.5.2 Experiments
Visualisation via bubble lines and polystyrene beads
[Okuda, 1982] used a specially designed hydrogen bubble-line technique to visualise the 2D velocity vector field $(u(x,z), w(x,z))^T$ in short wind waves. The bubble lines were produced intermittently by the electrolysis of water. Flow measurement was feasible except in the highly turbulent region from a little leeward of the crest to a short distance onto the windward side. A vorticity field was calculated via $\omega = \partial u/\partial z - \partial w/\partial x$. It was found that the surface vorticity layer is
Figure 2.3: Flow visualisation of the surface by seeding with small, slightly buoyant hollow glass spheres. The images are taken at times 21.5 s (a), 22.5 s (b), 23.5 s (c) and 24.5 s (d) after turning on the wind. The waves travel in the direction of the wind from left to right and appear as crosswind (vertical) structures in the images. At the beginning, the Langmuir circulations appear as streamwise streaks, but soon undergo bifurcations, so that the flow becomes 3D and turbulent (from [Melville et al., 1998]).
greatly thickened at the crest, especially in steep waves. It may happen that the surface velocity exceeds the phase speed (“excess flow”), which leads to a downward intrusion of water elements characterising microscale wave breaking. From the bubble lines it was possible to deduce the water-side tangential stress, whose magnitude in a small region at the crest is about five times higher than the mean wind stress. These stress peaks were held responsible for the occurrence of the excess flow and the development of turbulence.
Using floating polystyrene beads of different sizes, whose centers of gravity are located at different distances from the surface, it was possible to measure the characteristics of the flow just below the water surface near the crest. These investigations confirm the presence of the excess flow even where no downward intrusion occurs.
[Csanady, 1990] constructs a model of a “roller”, which is based on the experimental results of [Okuda, 1982]. The roller may be envisaged as a two-dimensional flow structure whose intense internal vorticity originates from the viscous boundary layer on the upwind side of a wavelet, where a shear-stress spike is exerted by the wind. From this model the correct Schmidt-number
dependence

    k = β^{-1} u_* Sc^{-1/2}

at a free surface can be derived. Unlike in the case discussed in section 2.4.4, here k_+ = k/u_* exhibits no Reynolds-number dependency, which was confirmed experimentally by [Jähne et al., 1987].
PIV measurements
Both PIV and PTV techniques were applied to measurements in a wind-induced wavy shear flow
[Dieter and Jähne, 1994; Hering et al., 1995b]. Here we will restrict ourselves to the experiments
of [Banner and Peirson, 1998; Peirson and Banner, 2003].
[Banner, 1990] concluded from aerodynamic measurements that the form drag (i. e. the momentum flux from the wind to the waves resulting from normal stress coherent with the wave slope), and not the tangential stress, is the major contribution to the total wind stress.
[Banner and Peirson, 1998] criticise the measurements of [Okuda, 1982] in several respects: the method is intrusive due to the presence of the hydrogen wire and the characteristics of the bubbles (size, rise velocity, impact on the water surface).
The surface tangential stress is measured by [Banner and Peirson, 1998] using a 2D-PIV technique, recording particles located in a laser light sheet aligned with the flow direction. In order to obtain the shear stress, the difference of the streamwise velocities of two particles is divided by their vertical separation: τ_tang = µ∆u/∆y. The measured tangential stress levels on the water side are significantly lower than those obtained by [Okuda, 1982] and are consistent with the total wind stress and form drag measurements: τ_tang/τ_total = O(0.3).
Using a similar 2D-PIV setup, [Peirson and Banner, 2003] found that the spilling region downwind of the crest generally remains locally compact, with a strong convergence of surface fluid at the toe and relatively weak divergence near the crest and the upwind face of the wave. This observation is in contrast to the suggestion of [Okuda, 1982] that the viscous sublayer on the windward face of microscale breaking waves flows over the crest and feeds the spilling region (see also Fig. 2.4).
Combined PIV and thermographic measurements
Using infrared thermography one can detect the turbulent wakes of microscale breaking waves. [Zappa et al., 2004] applied the active controlled flux technique (developed by [Jähne et al., 1989] and [Haussecker et al., 1995]) to measure the local transfer velocities in and outside of these wakes and found that on average the transfer velocity was enhanced by a factor of 3.5 inside the wakes. They also estimated that up to 75% of the transfer across the air-water interface was contributed by microscale wave breaking. They concluded that in their laboratory experiments microscale wave breaking was the dominant process controlling the gas transfer rate at low to moderate wind speeds. They also noted that at low wind speeds, apart from the microscale breaking waves, a secondary mechanism may be of importance in determining air-water transfer velocities. A promising candidate is the occurrence of small-scale Langmuir circulations (see section 2.5.1).
[Siddiqui et al., 2001] used PIV to observe the velocity field beneath the microscale breaking
waves, which were detected by infrared thermography. Because the measurements were relatively
[Figure 2.4, panels a and b: annotated schematics of the kinematics near the wave crest. Panel a indicates wind-induced tangential stress generating strong surface vorticity diffusing to the interior, parasitic capillaries, the envisaged site of separated airflow reattachment with a consequent shear-stress spike driving the wind drift current over the crest, a trough sheltered from the wind, and the 12 µm and 35 µm resolution PIV image windows; the spilling region occupies the entire crest, with vorticity generated by spilling or parasitic capillaries dispersing to the interior (wavelength ~175 mm). Panel b indicates a varying airflow reattachment point with no large shear-stress spike, a surface wind-induced drift velocity of ~0.3 u*a ± 0.1 u*a opposing rearward transport relative to the moving wave, wind-induced tangential stress generating surface vorticity of ~200 s⁻¹ and surface divergence <10 s⁻¹, parasitic capillaries, and intense vorticity (>1000 s⁻¹) with surface convergence (~100 s⁻¹) in the spilling region.]
Figure 2.4: Kinematic behaviour of microscale breaking waves. a As proposed by [Okuda, 1982]: In a frame of reference moving with the wave, transport is rearward except within the wave crest, where the surface velocity exceeds the phase speed. Strong surface tangential stresses from the point of airflow reattachment induce transport up the windward face and over the crest and are responsible for a large fraction of the vorticity generation. Furthermore, the PIV window sizes used for the investigation of [Peirson and Banner, 2003] are indicated. b As proposed by [Banner and Peirson, 1998; Peirson and Banner, 2003]: The contribution of the tangential stresses is comparably small. The spilling region is compact, with a strong convergence of surface fluid at the toe and weak divergence near the crest (figure from [Peirson and Banner, 2003]).
large-scale (measurement area about 8 × 6 cm² at a spatial resolution of about 2 × 2 mm²), it was not possible to resolve the viscous boundary layer, but it was possible to quantify the subsurface vorticity and thus the turbulence produced by the microbreakers. They found that the air-water heat transfer velocity was correlated with the magnitude of the near-surface vorticity, which was responsible for enhancing the air-water heat transfer rates. [Siddiqui et al., 2004] quantified the coherent structures generated beneath laboratory wind waves.
3 Fluid Flow Analysis
As this thesis is concerned with developing a novel fluid flow measurement technique, an overview is presented here of the various approaches that have been introduced, some of which are still under development.
In this chapter we will go through the subject in a coarse-to-fine manner, as outlined in Fig. 3.1: We start with an overview of the relevant quantities and technologies, and then turn to the measurement of velocity in particular. During the last 20 years image-based techniques have become common in velocity field measurement. Optical-flow based velocity analysis has recently become of interest to the experimental fluid mechanics community, although it was introduced in the context of computer vision much earlier. Chapter 4 is devoted to optical flow methods in fluid flow analysis.
[Figure 3.1 diagram: flow measurement (pressure, temperature, wall shear stress, velocity, ...) → velocity measurement (hot wire anemometry, acoustic Doppler current velocimetry, laser Doppler anemometry, image based methods, ...) → image based methods (particle image velocimetry, particle tracking velocimetry, optical-flow based methods: differential, tensor-based and frequency-based techniques, ...)]
Figure 3.1: Overview of the task of fluid flow analysis as presented in this chapter. We will gradually go deeper into the subject.
3.1 Measurement Technology for Quantities Related to Flows
There are various possibilities to categorise flow measurement techniques. We can classify them according to how they work (e. g. mechanical, electronic, optical, etc. – as was done in [Eckelmann, 1997]), or we can sort them according to the measured quantities (e. g. pressure, wall friction, temperature, velocity, etc. – as in [Nitsche, 1994; Tropea et al., in preparation]). In this section we will use the latter categorisation; in the following we give an overview of important quantities related to flows and how to measure them.
Pressure From the pressure distribution around a body we can compute both its drag and its lift, which play an important role in aerodynamics. Moreover, we can obtain information about the flow field itself, such as mean flow velocities or local wall shear stresses.
According to Bernoulli’s equation the total pressure splits into static pressure, dynamic pressure and hydrostatic pressure:
    p_tot = p_static + (ρ/2) u² + ρ g h = const.    (3.1)
Only the static pressure at the wall can be measured directly, for example by placing small boreholes in the wall and connecting them to a pressure transmitter via a hose. Examples of pressure transmitters are manometers or electromechanical sensors, like strain gauges or piezo elements. Because of their compact size, the latter are appropriate for measuring fluctuations of the static pressure (see [Detert et al., 2004]). In order to measure spatial distributions of surface pressure non-intrusively, pressure (oxygen) sensitive paints have become available [McLachlan and Bell, 1995].
In order to measure the total pressure, it is common to use pitot tubes (1D case). The concept of a pitot tube can be generalised to three dimensions by using sensors with several bores at once.
The dynamic pressure can be measured by exploiting Eq. (3.1) and subtracting the static pressure from the total pressure. For this task there exist devices like Prandtl's pitot tube.
Temperature and heat flux Because most flow properties are temperature-dependent, a simultaneous measurement of temperature and other quantities is often necessary.
Liquid crystals allow a simultaneous measurement of both the temperature and the velocity fields [Hiller et al., 1993]. The visualization of temperature using thermochromic liquid crystals is based on their temperature-dependent refractivity at the wavelengths of visible light.
Thermography can be used to measure spatially distributed and time-resolved temperature fields and heat fluxes [Garbe et al., 2003].
Especially for the investigation of temperature fields in combustion processes, laser methods are appropriate. If the pressure remains constant during a temperature change, the absorber density is modified by the temperature, which can be detected by a spectrometer [Wolfrum et al., 2000].
Wall shear stress Because the wall friction is responsible for the viscosity-dependent flow resistance, it may be important to measure this quantity globally. On the other hand, a local analysis of the wall shear stress gives insight into the boundary layer flow. An overview of some of the methods is given by [Fernholz et al., 1996].
Primarily in the high-speed (and thus high shear stress) regime, mechanical techniques like balances are used.
[Figure 3.2 diagram: velocity measurement methods arranged according to spatial dimensionality (x, y, z) and time t. Legend: DGV – Doppler global velocimetry, LTV – Laser transit velocimetry, GPD – Global phase Doppler, PD – Phase Doppler, LDV – Laser Doppler velocimetry, PIV – Particle image velocimetry, LFT – Laser flow tagging, PTV – Particle tracking velocimetry.]
Figure 3.2: Classification of velocity measurement methods according to the number of measured components and the dimensionality of the underlying spatial domain¹
Thermoelectric methods to measure the wall friction are techniques using hot films, momentum wires and hot wires at the surface. These methods have in common that the local wall shear stress is measured via forced convection in a thermoelectrically heated wall sensor (hot film) or via the measured near-wall velocity (momentum wires and hot wires). Because none of these techniques measures the wall shear stress directly, calibration is an especially important issue.
Pressure sensors can also be used to measure the wall shear stress. For example, a pitot tube can be slightly modified to measure wall friction. Another technique uses a surface fence, where the pressure difference up- and downstream of an edge is measured. All of these sensor-based methods are more or less invasive.
Because laser Doppler anemometry (see section 3.2.2) is capable of measuring velocities very precisely and with high resolution, it can be used to measure the velocity profile near a wall and thus the wall shear stress. Like the previous methods, LDA is only a point measurement method. Examples of spatial measurement methods are oil film interferometry [Monson et al., 1993] and infrared thermography [Mayer et al., 1998].
Velocity Again, pressure sensors can be used to measure velocity using Bernoulli's equation Eq. (3.1). Point measurement methods are predominantly thermoelectric techniques (hot-wire anemometry, momentum-wire anemometry), acoustic techniques (acoustic Doppler velocimetry) and optical techniques (laser Doppler anemometry). Spatial methods are the image-based techniques. In the next section we will have a closer look at some of these methods.
¹ Figure from http://www.sla.tu-darmstadt.de/lehre/smt.ger.htm/lecture-das_experiment_b.pdf (Tropea et al.).
3.2 Measurement of Velocity
The velocity field is one of the most basic quantities in continuum mechanics. Other kinematic quantities (like vorticity or strain rate) can be derived from the velocity field and its spatial and temporal derivatives. The Navier-Stokes equation and the continuity equation are formulated in terms of the velocity as the unknown.
This section is concerned with measuring the velocity, which can be interpreted as a spatio-temporal vector field, i. e. at every point of the fluid (which is embedded in three-dimensional space and may be instationary (3+1D)) there exists a three-component (3C) velocity vector:

    (x, y, z; t) ↦ (u(x, y, z; t), v(x, y, z; t), w(x, y, z; t))^T,   i. e. ℝ³ × ℝ → ℝ³.
Some methods are restricted to time-resolved velocity measurements at points (0+1D), in a
line (1+1D) or on a plane (2+1D), or they yield only one or two components of the flow vector
(1C/2C). If we deal with stationary flows, i. e. the velocity field does not depend on time, we can
extend any lower-dimensional method to a 3D method by traversing the sensor in space.
In this section we will have a closer look at some common non-image-based velocity measurement methods: hot-wire anemometry, representing the thermoelectric techniques, acoustic Doppler current velocimetry, representing the acoustic techniques, and laser Doppler anemometry, standing for the laser-based optical methods.
3.2.1 Thermoelectric Techniques - Hot-Wire Anemometry
Hot-wire anemometry permits the temporally highly resolved measurement of both mean velocities and fluctuations, so that for the first time it was possible to capture the turbulent behaviour of flows quantitatively.
Its principle is as follows: A thin metal wire is heated to a constant temperature, which lies significantly above the temperature of the surrounding flow. The required voltage U is used as a measure for the flow velocity u:

    U² = A + B uⁿ,    (3.2)

where A and B are calibration constants and the velocity exponent n can be set approximately to 0.5. Equation (3.2) serves as a simple calibration rule for hot wires. Calibration is commonly performed with a pitot tube as reference.
The time constant of the temperature adaption of a hot wire depends on its thickness. Typically these wires are very thin (a few microns), so that fast sampling is achievable (up to 50 kHz). The precision of hot-wire anemometers is about 1% under well-controlled conditions. Arrangements of two or more wires in a sensor allow for measuring two or three components of the velocity and other quantities like velocity gradients [Honkan and Andreopoulos, 1997].
Apart from the fact that hot-wire anemometry is a 0D and intrusive measurement method, its accuracy depends very strongly on correct calibration. Furthermore, this method is suitable only to a limited extent for measurements in water, because the fine sensor is prone to deposits, which may disturb the measurements. For further information see [Lomas, 1986] or [Perry, 1982].
Figure 3.3: Principle of a dual-beam laser Doppler system. A particle (speed v_p) passes through a region of crossing wave fronts (wavelength λ, crossing angle θ), which form a stationary wave (distance of the wave fronts: ∆x). Light of frequency f is emitted (from [Albrecht et al., 2003]).
3.2.2 Optical Methods - Laser Doppler Anemometry
Laser Doppler anemometry (or laser Doppler velocimetry) is based on the measurement of the light scattered by particles advected with the fluid. A moving particle causes a frequency shift of the scattered laser light, according to the Doppler effect for light. For further reading we refer to [Durst et al., 1976; Tropea, 1995].
Figure 3.3 (left) shows the principle of a simple dual-beam laser Doppler system. The beam emitted by the laser is split into two rays, which are focused in the measurement area. The light scattered by a traversing particle is projected onto a photo-detector, which is connected to a signal processing unit.
We can visualise the development of the Doppler signal by a small particle passing through a region of crossing wave fronts, which form a stationary wave. The particle scatters the light, which has a frequency f proportional to the velocity v_p of the particle and inversely proportional to the distance ∆x of the wave fronts:

    f = v_p/∆x   ⇒   v_p = f ∆x = f λ / (2 sin(θ/2)).
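As a small illustration of this relation (not part of the original setup description), the following sketch converts a measured Doppler burst frequency into a particle velocity; the laser wavelength and crossing angle in the example are arbitrary illustrative values.

```python
import numpy as np

def lda_velocity(f_doppler, wavelength, crossing_angle):
    """Particle velocity from the Doppler frequency of a dual-beam LDA:
    v_p = f * lambda / (2 * sin(theta / 2))."""
    return f_doppler * wavelength / (2.0 * np.sin(crossing_angle / 2.0))

# Illustrative values: 532 nm laser, 10 degree crossing angle, 1 MHz burst frequency
print(lda_velocity(1.0e6, 532e-9, np.deg2rad(10.0)))   # about 3 m/s
```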
By using more than one LDA component, i. e. one or two further beam pairs, which are
arranged orthogonally to the first beam pair, it is possible to measure 2C or 3C velocity vectors.
For the sizing of particles we have to bear in mind that the smaller a particle is, the better it follows the fluid, but the worse it scatters light. Below a particle size of about 10 λ resp. 1 λ, we are in the Mie resp. Rayleigh range, where the scattering behaviour differs from that in the range of geometrical optics.
3.2.3 Acoustic Methods - ADV and ADCP
Acoustic Doppler velocimetry (ADV) makes use of the acoustic Doppler effect to measure a
three-component velocity vector at a single point in liquids (0D3C). The device sends out a beam
of acoustic waves at a fixed frequency from the transmitter probe. These waves bounce off
moving particles in the liquid and three receiving probes detect the change in frequency of the
returned waves. Using this information, the velocity vector of the liquid can be reconstructed
[Woodward and Appell, 1986].
An acoustic Doppler current profiler (ADCP) measures how fast liquid is moving across an entire liquid column (1D3C). In addition to what an ADV measures, it records the time it takes for the acoustic waves to bounce back, so that it is able to locate the position of the particles along a line [Voulgaris and Trowbridge, 1998].
3.3 Image Based Methods
Image-based methods allow the calculation of 2D2C, 2D3C, 3D2C or 3D3C and/or time-resolved velocity vector fields. They all have in common that they try to find the optical flow, which is the displacement vector field in the image plane, by determining the displacements of image features in a number of successive frames.
All image-based methods have in common that some kind of tracer is added to the fluid, which should have two principal qualities: Firstly, the tracer's motion should be representative of the motion of the fluid. Secondly, the optical properties of the tracer have to be such that its motion can be detected by the camera and processed by the algorithms. Two kinds of tracer are commonly used: small particles suspended in the fluid and continuous tracers, like heat or concentration.
The methods presented in this and in the next section differ in the kind of features that are used to estimate the optical flow:
Discrete features Here single particles are segmented and tracked. Thus the corresponding
technique is called particle tracking velocimetry (see section 3.3.2).
Patterns of discrete features Using patterns of particles one can reduce the ambiguity that arises when tracking single particles. The algorithms are generally based on cross-correlation. This method is known as particle image velocimetry (see section 3.3.1).
Continuous features Continuous features, like grey values or intensity gradients, can be used
to extract the optical flow. Generally this is done by applying a model like constancy of gray
value, as in the optical-flow based methods (see chapter 4).
We note that there are further methods, like least squares matching, which has its origin in the field of photogrammetry [Gruen, 1985], and that hybrid methods have emerged from combining established techniques.
3.3.1 Particle Image Velocimetry
Particle image velocimetry (PIV) has become a standard tool in experimental fluid dynamics since the early nineties ([Adrian, 1991]). The books by [Raffel et al., 1998; Westerweel, 1993] give detailed descriptions of this technique and provide further references to the PIV literature.
In conventional 2D2C-PIV two subsequent images of the flow, which is seeded with small particles, are recorded using a laser light sheet and a high-speed CCD camera (see Fig. 3.4, left). The recordings are partitioned into small subareas, called “interrogation areas”, typically of size 16 × 16 or 32 × 32 pixels². The local displacement vector (∆x, ∆y)^T from one image to the
Figure 3.4: Left: Typical setup of a 2D2C-PIV system². Right: The images are partitioned into regular interrogation areas. Via cross correlation a displacement vector (∆x, ∆y)^T is estimated.
next is determined by computing the 2D cross-correlation function

    R(x, y) = Σ_{i=−K}^{K} Σ_{j=−L}^{L} g1(i, j) g2(i + x, j + y)    (3.3)

for all possible (x, y)^T and afterwards finding its maximum, where (x_{R→max}, y_{R→max})^T ≡ (∆x, ∆y)^T. Here the interrogation windows are of size (2K + 1) × (2L + 1) pixels². Because Eq. (3.3) is a convolution of the interrogation windows g1 and g2, we can use Fourier's theorem to reduce the expression to a pointwise multiplication of their Fourier transforms ĝ1 and ĝ2:

    R(x, y) = g1 ∗ g2   ∘—•   R̂ = ĝ1 · ĝ2.
Unfortunately Eq. (3.3) will yield different maximum correlation values for the same degree of
matching, because the function is not normalised: Samples with brighter particle images will produce higher correlation peaks than windows with weaker particle images. Thus a normalization
procedure is applied as follows:
    r(x, y) = Σ_{i=0}^{M} Σ_{j=0}^{N} [g1(i, j) − µ1][g2(i + x, j + y) − µ2(x, y)]
              / sqrt( Σ_{i=0}^{M} Σ_{j=0}^{N} [g1(i, j) − µ1]² · Σ_{i=0}^{M} Σ_{j=0}^{N} [g2(i + x, j + y) − µ2(x, y)]² ).    (3.4)

Here µ1 is the average of the template and is computed only once, while µ2(x, y) is the average of g2 coincident with the template g1 at position (x, y) and has to be computed at every position (x, y). Equation (3.4) cannot easily be evaluated in the frequency domain, so that nowadays normalised cross-correlation is done in the spatial domain rather than performing ordinary cross-correlation in the frequency domain, even though it is computationally more expensive ([Raffel et al., 1998]).
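The following Python sketch evaluates Eq. (3.4) directly in the spatial domain for a single interrogation window and recovers the displacement from the position of the correlation maximum. It is a deliberately simple reference implementation (no sub-pixel peak fit, no window deformation); the window sizes and the synthetic test pattern are illustrative choices only.

```python
import numpy as np

def normalized_cross_correlation(g1, g2, max_shift):
    """Direct spatial-domain evaluation of Eq. (3.4) between a template g1
    (M x N) and a larger search window g2 ((M + 2*max_shift) x (N + 2*max_shift))
    for integer shifts |x|, |y| <= max_shift.  Returns the correlation map and
    the shift of its maximum, interpreted as the displacement (dx, dy)."""
    g1 = np.asarray(g1, float)
    g2 = np.asarray(g2, float)
    M, N = g1.shape
    t = g1 - g1.mean()                     # zero-mean template
    denom1 = np.sum(t ** 2)
    shifts = range(-max_shift, max_shift + 1)
    r = np.full((len(shifts), len(shifts)), -np.inf)
    for iy, y in enumerate(shifts):
        for ix, x in enumerate(shifts):
            # sub-window of g2 coincident with the template shifted by (x, y)
            sub = g2[max_shift + y:max_shift + y + M,
                     max_shift + x:max_shift + x + N]
            s = sub - sub.mean()
            denom = np.sqrt(denom1 * np.sum(s ** 2))
            if denom > 0.0:
                r[iy, ix] = np.sum(t * s) / denom
    peak_iy, peak_ix = np.unravel_index(np.argmax(r), r.shape)
    return r, (shifts[peak_ix], shifts[peak_iy])

# Toy example: a random particle pattern displaced by (dx, dy) = (2, 1) pixels
rng = np.random.default_rng(0)
frame1 = rng.random((40, 40))
frame2 = np.roll(frame1, shift=(1, 2), axis=(0, 1))
g1 = frame1[10:26, 10:26]                  # 16 x 16 interrogation window
g2 = frame2[6:30, 6:30]                    # search region with a 4 pixel margin
_, (dx, dy) = normalized_cross_correlation(g1, g2, max_shift=4)
print(dx, dy)                              # -> 2 1
```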
Problems of PIV
The simple cross correlation scheme, naively implemented, leads to some difficulties, which one
tries to overcome:
Spurious vectors Due to the statistical nature of PIV, there may be “false” maxima in the
cross correlation function. Using a confidence measure, like the ratio of the tallest to the
second tallest correlation peak, one can detect these outliers and remove them.
Velocity gradients Using cross-correlation for motion analysis we assume that all particles within one interrogation area have moved homogeneously from one frame to the other. If velocity gradients are present in the flow, this is obviously not the case. To overcome this problem, iterative window-deformation methods have been developed. In each iteration the image area within the interrogation window is deformed according to the displacement field calculated in the previous iteration [Huang et al., 1993a,b].
Out-of-plane loss-of-pairs Applying a 2D laser light sheet to a 3D flow, particles may enter or leave the light sheet. This is a principal problem of 2D-PIV and can only be overcome by thickening the light sheet or using a 3D technique.
In-plane loss-of-pairs Due to the partitioning into interrogation windows, some imaged particles are missing in one of the two recordings. By applying a window-shifting technique, we are able to reduce the resulting bias [Westerweel et al., 1997].
Peak locking When the imaged particles become too small, the displacements tend to be biased toward integer values. Window shifting helps to reduce the effect of “peak locking”
[Gui and Wereley, 2002].
We have summarised some “design rules” in table 3.1.
PIV beyond 2D2C
Many approaches have been proposed to overcome the 2D2C restriction of the above scheme. In the following we discuss the most important of them:
Time resolved PIV In TR-PIV the analysis is not restricted to evaluating a single image pair; by continuously recording images at a high frame rate, the evaluation of instationary processes becomes possible. Moreover, by using temporal information a higher accuracy can be achieved [Hain and Kähler, 2004].
Stereoscopic PIV Here two cameras record the same area in a single laser light sheet, so that for each interrogation window two 2C vectors can be calculated. Using this information, we can calculate all three components of the velocity vector, provided knowledge about the system geometry has been acquired by calibration.
Multiplane stereoscopic PIV In addition to the 3C vectors, a volumetric recording can be achieved by sequentially scanning the target volume at different depths [Kähler and Kompenhans, 2000].
Photogrammetric PIV A voluminously illuminated region is recorded by at least three cameras from different directions. This allows for reconstructing the particles’ 3D positions. Velocity estimation is performed by cross-correlation of “interrogation volumes” [Schimpf et al.,
2003].
Holographic PIV Coherent light stemming from a laser illuminates a 3D target volume. The holograms are recorded on photographic film, which can be read out with coherent light beams [Hinsch, 2002]. Due to its high hardware demands, the application of holographic
Table 3.1: Design rules for 2D2C-PIV².

    Particle diameter:                    2 to 3 pixels
    Particle density:                     greater than 6 per interrogation window
    Maximum displacement in-plane:        1/4 of the width of an interrogation window
    Maximum displacement out-of-plane:    1/4 of the thickness of the laser light sheet
PIV has so far been limited to relatively simple flow configurations.
PIV with color-coded light-sheets By using two or more color-coded light sheets [Brücker, 1996; Ruck, 2003], we can illuminate the whole 3D field at once instead of sequentially scanning the volume. This has the advantage that a higher frame rate can be achieved. Disadvantages are artifacts, which occur where particles overlap, and the wavelength-dependent scattering behaviour of the particles (Mie effects). The latter can be avoided by using a non-coherent light source.
3.3.2 Particle Tracking Velocimetry
In contrast to PIV, particle tracking velocimetry (PTV) does not establish correlations between
patterns, but tries to find correspondences between segmented particles. Classical PTV consists
of the following three tasks:
Segmentation of the particles Generally the particles have to be separated from the background and from each other. The segmentation procedure is addressed in section 7.2 in more
detail.
Feature extraction Relevant features are the particle position, which can be estimated by calculating the weighted mean of the coordinates of a segmented particle (“center of mass”), the
particle’s maximum or mean gray value, the particle’s velocity, which can be calculated using
the method of optical flow for instance, or shape descriptors like higher moments.
Temporal correspondence analysis The goal is to establish a unique correspondence between a particle in the first frame and the same particle in the next frame.
This can be done using the features of the particles. The simplest conceivable approach is the “nearest neighbour search” (see the sketch after this list). [Hering, 1996; Perkins and Hunt, 1989] use the feature of position only, but by incorporating other characteristics it is possible to limit the search range for the corresponding particle [Klar, 2005].
Many algorithms make use of the history of the particle. Some of them implicitly assume smoothness of motion and make use of statistical tools like the Kalman filter [Welch and Bishop, 1995] or its generalization, the particle filter [Ristic et al., 2004].
A method based on global optimization is variational PTV, proposed by [Ruhnau et al., 2005a].
Here a data term, which incorporates the local information of the nearest-neighbour-solution,
is supplemented by a smoothness term, similar to differential global optical flow techniques
in section 4.2.1.
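The sketch announced above illustrates the nearest-neighbour correspondence search using particle positions only; the k-d tree, the fixed search radius and the greedy one-to-one conflict resolution are illustrative choices, not the algorithm of any of the cited references.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbour_matching(pos1, pos2, search_radius):
    """Temporal correspondence by a plain nearest-neighbour search.

    pos1, pos2: (N, 2) and (M, 2) arrays of particle centroids in two
    successive frames.  Returns (i, j) index pairs matching particle i of
    frame 1 to particle j of frame 2, accepting only matches closer than
    search_radius and resolving conflicts greedily (closest pair first)."""
    dist, idx = cKDTree(pos2).query(pos1, distance_upper_bound=search_radius)
    matches, used = [], set()
    for i in np.argsort(dist):
        j = int(idx[i])
        if np.isfinite(dist[i]) and j not in used:
            matches.append((int(i), j))
            used.add(j)
    return matches

# Toy example: three particles displaced by roughly (1.0, 0.5) pixels
p1 = np.array([[10.0, 10.0], [20.0, 15.0], [30.0, 25.0]])
p2 = p1 + [1.0, 0.5] + 0.1 * np.random.default_rng(1).standard_normal(p1.shape)
print(nearest_neighbour_matching(p1, p2, search_radius=3.0))
```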
² Figure and rules from http://www.sla.tu-darmstadt.de/lehre/smt.ger.htm/lecture-piv.pdf (Tropea et al.).
Three dimensional PTV
Most of the 3D-PTV approaches work photogrammetrically, i. e. a flow volume is recorded by
two, three or more cameras from different viewpoints. One exception is the method proposed
by [Willert and Gharib, 1992], where the third spatial dimension is estimated by applying depth
from focus in a voluminously illuminated region.
In three dimensions, the problem of correspondence analysis can be tackled in two ways:
• The stereo-correspondence problem (i. e. to identify one and the same particle in the recordings of the various cameras) can be solved making use of the epipolar geometry. Because of the similarity of the imaged particles, classical two-camera stereo does not work, so that at least three cameras are needed. Once the 3D coordinates of the particles are extracted, temporal correspondence analysis is performed using the techniques from above, generalised to 3D [Maas et al., 1993; Malik et al., 1993].
• Standard 2D-PTV is performed to obtain trajectories for each of the different views. In a
second step, these trajectories are matched using epipolar geometry. Here two cameras are
sufficient, because generally the trajectories contain enough “structure”, so that they can be
matched unambiguously [Engelmann, 2000; Klar, 2005].
3.3.3 PIV versus PTV
PIV and PTV are two fundamentally different approaches. Several different algorithms, both of the PIV and of the PTV type, have so far been compared to each other in three international “challenges”³ [Stanislas et al., 2003, 2005]. [McKenna and McGillis, 2002] compared PTV and various PIV techniques regarding efficiency and accuracy.
One of the results was that PTV is clearly complementary to PIV in several aspects:
• To achieve good precision, PIV needs a certain minimum seeding density (see table 3.1). This
fact is in contrast to the requirements of PTV, where the probability for mismatches increases
with the seeding density.
• PTV yields Lagrangian trajectories, whereas PIV provides an Eulerian vector field. It depends on the application which output is better suited. Moreover, PTV can deploy its power when whole image sequences are available; most PIV algorithms are designed for image pairs.
• Generally PTV yields a higher resolution, because a vector is assigned to each particle. Unfortunately, the data lie on an irregularly spaced grid, so that when the vector field is interpolated onto a regular grid this advantage is attenuated to a certain extent.
• PTV has clear advantages at locations where high velocity gradients are present (for example at boundary layers). Simple PIV relies on the assumption of a homogeneous displacement of the particles inside the interrogation window from one image to the next. Thus it cannot resolve these kinds of flows accurately. Sophisticated PIV algorithms (e. g. containing iterative window deformation) alleviate but cannot negate this point of criticism [Stanislas et al., 2005].
Attempts have been made to combine the advantages of PIV with the advantages of PTV
³ The results of the third Int. PIV Challenge in Pasadena, USA, have not been published so far (2006). For further information, see http://www.pivchallenge.org
[Bastiaans et al., 2002; Cowen and Monismith, 1997; Keane et al., 1995]. These hybrid methods
use PIV as an initialization step for a subsequent PTV analysis. The principal procedure of
“superresolution PIV” is similar to our approach in section 7.5.
4 Optical-Flow Methods in Fluid Flow Analysis
4.1 Introduction
Optical flow can be considered as the distribution of apparent velocities of movement of brightness patterns in an image [Horn and Schunk, 1981]. More precisely the optical flow is an approximation to the two-dimensional motion field of an image sequence. We obtain this motion field
by projecting the three-dimensional velocities of object points in three-dimensional space onto
the two-dimensional image-plane [Barron et al., 1994].
In estimating the optical flow and in determining motion fields from the estimated optical flow,
there arise a number of peculiarities and difficulties, which have to be taken into account:
• There may be regions in the images where no motion can be determined (“black wall problem”) or where only the normal velocity component can be determined (“aperture problem”). Figure 4.1 illustrates these situations. In order to obtain a dense flow field, an interpolation or a regularization technique has to be applied. This is an intrinsic problem in optical-flow computation, and it is addressed throughout the chapter.
• Brightness changes have to be taken into account. Illumination changes lead to a misinterpretation of the optical flow field. This can be nicely demonstrated by considering a rigid
sphere with homogeneous surface reflectance, spinning around an axis through the center of
the sphere (Fig. 4.2). If the surface is not textured and the illumination stays constant, the
optical flow field would be zero over the entire sphere. If a directional light source moves
around the same sphere the illumination changes would be falsely attributed to motion of the
sphere surface. In many situations we can apply physical models of brightness variations to
deal with brightness changes. We address this problem in section 4.3.4.
• There may be regions in the image, where the motion is non-translational. Especially for
applications in fluid dynamics the non-rigid behaviour of fluids has to be taken into account.
For these situations appropriate models have been developed, explained in section 4.3.1.
• In the presence of motion discontinuities, reflections or corrupted pixels, parametrised flow field models fail. Using a robust approach described in section 4.3.3 we can determine a correct motion field even in these situations.
• For large displacements the temporal sampling theorem may be violated. Coarse-to-fine techniques applied to bandpass-filtered image sequences help to cope with this problem. A simple coarse-to-fine algorithm is presented in section 4.3.2.
Optical flow methods can be classified as belonging to one of these groups:
Differential techniques These methods compute image velocity from spatio-temporal intensity derivatives.
Figure 4.1: Black wall and aperture problem. At the corners of the moving square, a unique velocity can
be estimated. On the edges of the squares only the normal component of the velocity is determinable. In
the center the motion is completely unconstrained (from [Simoncelli, 1993]).
Frequency-based techniques These methods use energy/phase information in the output of
velocity tuned filters.
Tensor-based techniques The local image brightness distribution can be represented as a
structure-tensor, from which we are able to deduce the motion field.
Though the attempts to estimate optical flow look very different at first glance, all of these approaches are closely related. [Simoncelli, 1993] showed that differential techniques are equivalent to frequency-based techniques, provided that the derivatives and filters are chosen appropriately. The structure tensor can be constructed based on operations in the spatio-temporal domain [Jähne, 2002], but it can also be obtained by linear combinations of outputs of filters in the frequency domain [Granlund and H., 1995]. [Jähne et al., 1999] and [Barron et al., 1994] provide a well-founded overview of the field of optical flow estimation and give a quantitative comparison of the results. Though the method of optical flow is quite common in computer vision, the extent of its application in experimental fluid dynamics has so far been relatively small.
In the following we will provide a review of the optical flow methods classified above, and
we will show how to cope with the difficulties listed above. We conclude this chapter with a
literature review.
4.2 Optical Flow Methods
4.2.1 Differential Techniques
The general task is to determine the optical flow field f = (f1 , f2 )T = (dx/dt, dy/dt)T from
the gray values of an image sequence. In the simplest case it is assumed that the gray value
g(x, t) along a path x(t) remains constant for all times:
g(x(t), t) = c.
By taking the temporal derivative on both sides and applying the chain-rule we get:
    dg/dt = (∂g/∂x)(dx/dt) + (∂g/∂y)(dy/dt) + ∂g/∂t = 0.    (4.1)
Figure 4.2: Illumination changes and optical flow. Left: Spinning sphere with fixed illumination leads to
zero optical flow. Right: Moving illumination source causes apparent optical flow field without motion of
the sphere (from [Jähne et al., 1999]).
Written in vector notation using ∇g = (∂g/∂x, ∂g/∂y)^T, this yields the brightness constancy constraint equation (BCCE):

    (∇g)^T f + g_t = 0.    (4.2)
Because the partial derivatives g_x = ∂g/∂x, g_y = ∂g/∂y and g_t = ∂g/∂t are accessible by application of a derivative filter, we obtain one constraint for the two-component flow field. Dealing with two unknowns in one equation, we have an ill-posed problem. Graphically speaking, the solution of Eq. (4.2) determines a line containing all vectors that are possible candidates for the true optical flow vector (Fig. 4.3). Without further assumptions only the flow perpendicular to the constraint line can be estimated. This problem is commonly referred to as the aperture problem of motion estimation.
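As a small numerical illustration (not taken from the thesis), the following sketch computes this normal flow f⊥ = −g_t ∇g/|∇g|² per pixel from two frames; the forward-difference temporal derivative and the small regularisation constant are illustrative choices.

```python
import numpy as np

def normal_flow(g0, g1, eps=1e-6):
    """Per-pixel normal flow f_perp = -g_t * grad(g) / |grad(g)|**2, the only
    component of the optical flow constrained by Eq. (4.2) at a single pixel."""
    g0 = np.asarray(g0, float); g1 = np.asarray(g1, float)
    gy, gx = np.gradient(g0)              # spatial derivatives
    gt = g1 - g0                          # simple temporal derivative
    norm2 = gx**2 + gy**2 + eps
    return -gt * gx / norm2, -gt * gy / norm2

# Toy usage: a smooth pattern translated by one pixel in x
x, y = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
g0 = np.sin(0.3 * x) + np.cos(0.2 * y)
g1 = np.sin(0.3 * (x - 1.0)) + np.cos(0.2 * y)
fx, fy = normal_flow(g0, g1)
```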
Solving Eq. (4.2) for points in a sufficiently large neighbourhood U around x, we may get other constraint lines, so that we can determine the true optical flow vector as the intersection point of the constraint lines. But we must not choose the neighbourhood too large, because we cannot guarantee that the motion is constant over a larger area. How large to choose the neighbourhood is referred to as the generalised aperture problem. One way to weaken the assumption of constant motion is to find a local parametrization of the flow field, so that we demand local coherency instead of local constancy. This leads us to the second constraint of differential optical flow estimation, the spatial coherence constraint.
The concept of optical flow originally comes from hydrodynamics. Gray values “flow” over the
image plane, like volume elements flow in fluids. In hydrodynamics the principle of conservation
of mass is formulated by the continuity-equation, which reads in its differential form (compare
Eq. (2.5)):
    ∂ρ/∂t + ∇(uρ) = ∂ρ/∂t + u∇ρ + ρ∇u = 0.    (4.3)
The three-dimensional velocity u of a fluid element with density ρ in three-dimensional space
is apparently analogous to the two-dimensional optical flow f of a gray value g in two-dimensional
space. The BCCE Eq. (4.2) corresponds to the continuity-equation Eq. (4.3), if we drop ρ∇u.
Why do we have to drop the last term? Consider an object moving away from the camera. In
Figure 4.3: Illustration of the aperture problem: constraint line defined by Eq. (4.2). The normal flow
vector f ⊥ is pointing perpendicular to the line (from [Jähne et al., 1999]).
this case the total brightness change dg/dt of a gray value on a path x(t) is zero, because the irradiance in the image plane remains the same for an object moving perpendicular to the image plane. Because both g_t and ∇g are zero, we can apply Eq. (4.2). But we cannot apply Eq. (4.3), because the additional term ρ∇u resp. g∇f is not zero, since the motion is not divergence free.
However, under certain conditions the use of a two-dimensional continuity equation instead of the BCCE can be motivated. This is the case if we have to deal with 2D transmittance images of a 3D fluid flow, so that the 2D (imaged) flow is the density-weighted average of the 3D flow [Corpetti et al., 2002; Wildes et al., 2000]. The constraint on the data then becomes

    g_x f_1 + g_y f_2 + g (∂f_1/∂x + ∂f_2/∂y) + g_t = 0.    (4.4)
Local weighted least squares
Assuming the optical flow to be constant within a small neighbourhood, [Lucas and Kanade, 1981] proposed a local method to estimate the optical flow. The goal is to minimise the squared left-hand side of the BCCE Eq. (4.2) in a local neighbourhood U around x, which is given by the weighting (or window) function w(x − x′):

    f̂ = arg min_f ∫_{−∞}^{∞} w(x − x′) [(∇g)^T f + g_t]² dx′.    (4.5)
The weighting function is given in the simplest case by a box filter (all points in the neighbourhood are weighted equally), but better results can be achieved using a binomial filter. Standard least squares minimization (setting the partial derivatives of the functional with respect to f_1 and f_2 to zero) yields the equation system

    ⎛ ⟨g_x g_x⟩  ⟨g_x g_y⟩ ⎞ ⎛ f_1 ⎞     ⎛ ⟨g_x g_t⟩ ⎞
    ⎝ ⟨g_x g_y⟩  ⟨g_y g_y⟩ ⎠ ⎝ f_2 ⎠ = − ⎝ ⟨g_y g_t⟩ ⎠ ,    (4.6)

abbreviated as A f = b, where A denotes the 2 × 2 matrix, f the flow vector and b the complete right-hand side, with the abbreviation

    ⟨a⟩ = ∫_{−∞}^{∞} w(x − x′) a dx′.
The solution of Eq. (4.6) is given by

    f = A⁻¹ b,

provided the inverse of A exists, i. e. the determinant of A is non-zero:

    det A = ⟨g_x g_x⟩ ⟨g_y g_y⟩ − ⟨g_x g_y⟩² ≠ 0.

This is not the case if all spatial derivatives in the local neighbourhood are zero (“black wall problem”), or if all gradients in the local neighbourhood point in the same direction (“aperture problem”).
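A minimal dense implementation of this local weighted least squares scheme might look as follows; the Gaussian window standing in for w, the averaged-frame gradient and the regularisation of the determinant are illustrative choices and not the implementation used in this thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lucas_kanade(g0, g1, sigma=2.0, eps=1e-9):
    """Dense local weighted least squares optical flow, cf. Eq. (4.5)/(4.6).

    The window function w is realised as a Gaussian of width sigma, so the
    local sums <g_p g_q> become Gaussian-smoothed products of derivatives.
    Returns the two flow components f1, f2 for every pixel."""
    g0 = np.asarray(g0, float); g1 = np.asarray(g1, float)
    gy, gx = np.gradient(0.5 * (g0 + g1))   # spatial derivatives of the mean frame
    gt = g1 - g0                            # simple temporal derivative
    gxx = gaussian_filter(gx * gx, sigma)   # <g_x g_x>, etc.
    gxy = gaussian_filter(gx * gy, sigma)
    gyy = gaussian_filter(gy * gy, sigma)
    gxt = gaussian_filter(gx * gt, sigma)
    gyt = gaussian_filter(gy * gt, sigma)
    det = gxx * gyy - gxy ** 2              # near zero -> aperture / black wall problem
    f1 = (-gyy * gxt + gxy * gyt) / (det + eps)
    f2 = ( gxy * gxt - gxx * gyt) / (det + eps)
    return f1, f2

# Toy usage: a smooth random texture moved by one pixel along y gives f2 close to 1
rng = np.random.default_rng(0)
tex = gaussian_filter(rng.random((130, 128)), 3.0)
frame0, frame1 = tex[1:129, :], tex[0:128, :]
f1, f2 = lucas_kanade(frame0, frame1)
print(f2[60:70, 60:70].mean())              # roughly 1
```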
Global constraints
Instead of assuming local spatial constancy (and therefore coherence) by introducing a window function, we can demand global spatial coherence. One can determine the optical flow by minimizing the BCCE Eq. (4.2) over the entire image Ω. To make the problem well-posed an additional term, the regularizing spatial coherence constraint ‖e_S‖², is introduced:

    f̂ = arg min_f ∫_Ω [(∇g)^T f + g_t]² dx′ + λ² ‖e_S‖².    (4.7)
The parameter λ controls the influence of the spatial coherence term. [Horn and Schunk, 1981]
propose global smoothness for the spatial coherence constraint:
    ‖e_S‖² = ∫_Ω ‖∇f_1(x′)‖² + ‖∇f_2(x′)‖² dx′.    (4.8)
There are other suggestions for ‖e_S‖², which may be better suited to special kinds of problems (e. g. fluid flow analysis). To solve the minimization problem we use a variational approach. Thus, the integral equation Eq. (4.7) can be solved by a system of Euler-Lagrange equations:
    L_{f_1} − ∂/∂x L_{f_{1x}} − ∂/∂y L_{f_{1y}} = 0,
    L_{f_2} − ∂/∂x L_{f_{2x}} − ∂/∂y L_{f_{2y}} = 0.
The integrand of Eq. (4.7) can be identified with the Lagrange function:

    L(f_1, f_2, f_{1x}, f_{1y}, f_{2x}, f_{2y}) = (g_x f_1 + g_y f_2 + g_t)² + λ² (f_{1x}² + f_{1y}² + f_{2x}² + f_{2y}²).
Plugging L into the Euler-Lagrange equations yields

    ((∇g)^T f + g_t) g_x − λ² ((f_1)_{xx} + (f_1)_{yy}) = 0,
    ((∇g)^T f + g_t) g_y − λ² ((f_2)_{xx} + (f_2)_{yy}) = 0,

which can be combined into one vector equation:

    ((∇g)^T f + g_t) ∇g − λ² ∇²f = 0.    (4.9)
In the case of a high spatial gray value variation (that means ∇g is large), the first summand dominates the equation, and the optical flow is calculated using the BCCE. But if we suffer from a “black wall problem”, the optical flow is calculated from the last summand, which states the Laplace equation ∇²f = 0.
The discretisation can be performed using finite differences or finite elements. Once it is guaranteed that the problem is well-posed, a number of minimisation schemes, like Gauss-Jordan elimination or Gauss-Seidel iteration, can be applied.
4.2.2 Frequency-Based Techniques
The concept of image sequences as spatio-temporal images allows one to analyse motion in the
corresponding spatio-temporal frequency domain (Fourier domain). Let g(x, t) be an image
sequence of any pattern moving with constant velocity, causing the optical flow f at every point in the image plane; the resulting spatio-temporal structure can be described by

    g(x, t) = g(x − f t).    (4.10)
The spatio-temporal Fourier transform ĝ(k, ω) of Eq. (4.10) is given by

    ĝ(k, ω) = ĝ(k) δ(k^T f − ω),    (4.11)

where ĝ(k) is the spatial Fourier transform of the pattern, and δ(·) denotes Dirac's delta distribution. This equation states that the three-dimensional Fourier spectrum of a pattern moving with constant velocity condenses to a plane in Fourier space. The plane equation in the Fourier domain is given by the argument of the delta distribution in Eq. (4.11) and can be considered as an alternative formulation of the BCCE Eq. (4.2):

    ω(k, f) = k^T f.    (4.12)
Taking the derivatives of ω(k, f ) with respect to kx and ky yields both components of the
optical flow:
f = ∇k ω(k, f ).
Quadrature-filter-techniques try to estimate the orientation of this plane by using velocity tuned
filters in the Fourier-domain. A quadrature-filter-pair is a real frequency selective filter together
with its imaginary Hilbert-transform. Its transfer function can be written down in complex notation q̂(k).
The most common quadrature filter pair is the Gabor filter, which selects a certain spatio-temporal frequency region with a Gaussian window centered at (k_0, ω_0). Its complex transfer function is

    Ĝ(k, ω) = exp(−((k − k_0)² + (ω − ω_0)²) σ²/2).    (4.13)
From this the spatio-temporal filter mask can be computed using the shift-theorem:
    G(x, t) = 1/((2π)^{3/2} σ³) · exp[i(k_0 x + ω_0 t)] · exp[−(x² + y² + t²)/(2σ²)].    (4.14)
By applying this filter for different parameter sets (k, ω) on the original spatio-temporal image
we get estimates of the spectral density (or energy) of the corresponding periodic image structure
Figure 4.4: Left: A simple neighbourhood in 2D; the grey values depend only on one coordinate, in the direction of the unit vector n̂. Center: space-time diagram with one spatial component. Right: space-time diagram with two spatial components (from [Jähne, 2002]).
belonging to these parameter sets. Ideally, for a single translational motion, the responses of these
filters are concentrated about a plane in (k, ω)-space, so that we are able to get the optical flow
by a least squares fit to the data. Another way to calculate the optical flow is by constructing a
structure tensor composed of the filter outputs ([Larsen, 1998]).
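For illustration, the following sketch constructs one spatio-temporal Gabor mask according to Eq. (4.14); a velocity-tuned filter bank would be obtained by repeating this for several tuning frequencies (k_0, ω_0). The grid size and tuning values are arbitrary illustrative choices.

```python
import numpy as np

def gabor_filter_xyt(k0, omega0, sigma, half_size):
    """Complex spatio-temporal Gabor mask according to Eq. (4.14) on a cubic
    (x, y, t) grid of side 2*half_size + 1.  k0 = (k0x, k0y) is the tuning
    wave number, omega0 the tuning temporal frequency, sigma the width of
    the Gaussian envelope."""
    r = np.arange(-half_size, half_size + 1)
    x, y, t = np.meshgrid(r, r, r, indexing="ij")
    envelope = np.exp(-(x**2 + y**2 + t**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * (k0[0] * x + k0[1] * y + omega0 * t))
    return envelope * carrier / ((2.0 * np.pi) ** 1.5 * sigma**3)

# A filter tuned to a pattern with wave number (0.5, 0) moving with f = (1, 0),
# i.e. lying on the plane omega = k^T f = 0.5
q = gabor_filter_xyt(k0=(0.5, 0.0), omega0=0.5, sigma=3.0, half_size=7)
```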
4.2.3 Tensor-Based Techniques
Optical flow estimation can be formulated as orientation analysis in a three-dimensional spatio-temporal image. The concept of orientation analysis of a pattern in 2D (Fig. 4.4, left and center) can be generalised to three dimensions (Fig. 4.4, right). Any moving grey value structure causes inclined patterns. The goal of tensor-based optical flow estimation is to find the orientation of these patterns, provided that oriented patterns exist.
Let r = (r_1, r_2, r_3)^T be the vector pointing in the direction of constant brightness within the three-dimensional xt-domain. Once r is estimated, we get for the optical flow:

    f = (f_1, f_2)^T = (1/r_3) (r_1, r_2)^T.    (4.15)
r points orthogonal to the spatio-temporal gradient vector ∇_xt g = (g_x, g_y, g_t)^T. Therefore the scalar product between r and ∇_xt g has to vanish:

    (g_x, g_y, g_t) · (r_1, r_2, r_3)^T = g_x r_3 f_1 + g_y r_3 f_2 + g_t r_3 = r_3 [(∇g)^T f + g_t] = 0.
We arrive at the well-known BCCE Eq. (4.2). Instead of the approach of [Lucas and Kanade,
1981], where we minimised the BCCE in a spatial neighbourhood, here we minimise ∇_xt g · r in a spatio-temporal neighbourhood U, which is characterised by the window function w(x − x′, t − t′):

    r̂ = arg min_r ∫_{−∞}^{∞} w(x − x′, t − t′) [∇_xt g(x′, t′) · r]² dx′ dt′.    (4.16)
Under the assumption of constant r (that is, constant f) within U, we can reformulate the minimisation problem as

    r̂ = arg min_r ⟨[∇_xt g · r]²⟩ = arg min_r r^T ⟨∇_xt g · ∇_xt g^T⟩ r = arg min_r r^T J r,    (4.17)

using the abbreviation

    ⟨a⟩ = ∫_{−∞}^{∞} w(x − x′, t − t′) a dx′ dt′.    (4.18)
J, with components J_pq = ⟨g_p g_q⟩, is the three-dimensional symmetric structure tensor, where g_p, p ∈ {x, y, t}, denotes the partial derivative along the coordinate p.
The structure tensor can be transformed into diagonal form by means of a rotation. Thus the principal axes of the structure tensor can be found by solving the Eigenvalue problem

    J r = λ r.    (4.19)
The Eigenvector corresponding to the minimal Eigenvalue denotes the direction of constant brightness in the xt-domain, from which the optical flow can be calculated according to Eq. (4.15). From the rank of the structure tensor we can deduce the type of motion: constant brightness resp. “black wall problem” (rank(J) = 0), spatial orientation and constant motion resp. “aperture problem” (rank(J) = 1), distributed spatial structures and constant motion (rank(J) = 2), distributed spatial structures but no coherent motion (rank(J) = 3). The structure tensor technique is not only able to give an estimate of the optical flow, but can also provide a confidence measure that states something about the quality of the estimate. For the discussion in table 4.1, we order the Eigenvalues according to λ1 ≥ λ2 ≥ λ3 ≥ 0. The corresponding Eigenvectors have the form r_i = (r_ix, r_iy, r_it)^T.
A detailed analysis of the structure tensor technique and its practical application to optical
flow computation can be found in [Jähne et al., 1999]. The structure tensor technique can be
formulated as a solution of the total least squares (TLS) problem in a more general way (see
appendix A).
Implementation of the structure tensor
Written out explicitly, the structure tensor of Eq. (4.17) is

        ⎛ ⟨g_x g_x⟩  ⟨g_x g_y⟩  ⟨g_x g_t⟩ ⎞
    J = ⎜ ⟨g_x g_y⟩  ⟨g_y g_y⟩  ⟨g_y g_t⟩ ⎟ ,
        ⎝ ⟨g_x g_t⟩  ⟨g_y g_t⟩  ⟨g_t g_t⟩ ⎠

using the abbreviation Eq. (4.18).
The implementation of the structure tensor can be carried out very efficiently with standard image processing operators. Identifying the convolution of Eq. (4.18) with a smoothing operation (for example the isotropic Binomial operator B), and the derivatives in the pth resp. qth direction with edge detectors (for example the optimised Sobel operators D_p and D_q [Scharr, 2000]), we can construct a structure tensor operator:

    J_pq = B(D_p · D_q),
Table 4.1: Classification of the Eigenvalues of the structure tensor in a spatio-temporal neighbourhood.

λ1 = λ2 = λ3 = 0 (rank(J) = 0): There is no variation of the grey values. We are in a constant neighbourhood, where we are not able to reconstruct any kind of optical flow.

λ1 > 0, λ2 = λ3 = 0 (rank(J) = 1): The grey values vary in one direction, which is characterised by the only non-vanishing Eigenvector. Because we suffer from the “aperture problem”, we can calculate only the normal optical flow: f⊥ = −r_1t/(r_1x² + r_1y²) (r_1x, r_1y)^T.

λ1, λ2 > 0, λ3 = 0 (rank(J) = 2): The grey values vary in two directions and are constant in a third direction, which is characterised by the Eigenvector belonging to the zero Eigenvalue. Thus the optical flow is here: f = (1/r_3t) (r_3x, r_3y)^T.

λ1, λ2, λ3 > 0 (rank(J) = 3): The grey values vary in all directions. The optical flow cannot be calculated with this simple model.
where · denotes a pixel-wise multiplication.
Because smoothing in general results in a loss of information, the result of applying the structure tensor operator can be stored in a more compact sequence than the original data. In practice this can be handled by downsampling the resulting sequence, for instance by a factor of two. In order to perform the principal axis transformations efficiently, we bear in mind that we deal with very small symmetric matrices. For these, the numerical method of Jacobi transformations can be used to find the Eigenvalues and Eigenvectors [Press et al., 1992].
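A compact sketch of the whole chain — derivative filters, spatio-temporal averaging and Eigenvalue analysis — is given below. It stands in for the operator notation above with off-the-shelf SciPy filters (a Gaussian instead of the Binomial operator, plain Sobel operators instead of the optimised ones) and should therefore be read as an illustration, not as the implementation used in this thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel, shift

def structure_tensor_flow(seq, sigma=2.0):
    """Tensor-based optical flow for a spatio-temporal image seq[t, y, x].

    Derivatives D_p are realised with plain Sobel operators, the averaging B
    with a Gaussian window.  For every pixel of the middle frame, the
    Eigenvector of J belonging to the smallest Eigenvalue gives the direction
    of constant brightness, from which the flow follows via Eq. (4.15)."""
    seq = np.asarray(seq, float)
    d = {p: sobel(seq, axis=ax) for p, ax in (("x", 2), ("y", 1), ("t", 0))}
    mid = seq.shape[0] // 2
    order = ("x", "y", "t")
    J = np.empty(seq.shape[1:] + (3, 3))
    for i, p in enumerate(order):
        for j, q in enumerate(order):
            J[..., i, j] = gaussian_filter(d[p] * d[q], sigma)[mid]
    w, v = np.linalg.eigh(J)          # Eigenvalues in ascending order
    r = v[..., :, 0]                  # Eigenvector of the smallest Eigenvalue
    # Flow is only meaningful where |r_t| is not close to zero (cf. table 4.1)
    f1 = r[..., 0] / (r[..., 2] + 1e-12)
    f2 = r[..., 1] / (r[..., 2] + 1e-12)
    return f1, f2

# Toy usage: five frames of a smooth texture moving by one pixel per frame in x
rng = np.random.default_rng(0)
base = gaussian_filter(rng.random((64, 64)), 3.0)
seq = np.stack([shift(base, (0.0, float(k))) for k in range(5)])
f1, f2 = structure_tensor_flow(seq)
print(f1[32, 32])                     # close to 1
```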
4.3 Improvements of Optical Flow Determination
In this section improvements to the determination of optical flow will be presented. These improvements help to solve the difficulties mentioned in the introduction. Each improvement can be applied to more than one (but not necessarily to all) of the methods for determining optical flow.
4.3.1 Parameterisation of 2D-Optical Flow fields
As mentioned in section 4.1, the standard local optical flow methods assume that the optical flow f(x, t) is constant within the local neighbourhood U around x. The optical flow may instead be expanded into a first order Taylor series in the vicinity of (x_0, t_0) [Farnebäck, 2000]:

    f(x, t) ≈ f(x_0, t_0) + ⎛ ∂u/∂x  ∂u/∂y ⎞ x + ⎛ ∂u/∂t ⎞ t ≡ t + A x + a t,
                            ⎝ ∂v/∂x  ∂v/∂y ⎠     ⎝ ∂v/∂t ⎠

where the translation t abbreviates f(x_0, t_0), A is the matrix of spatial derivatives and a the vector of temporal derivatives. The BCCE supplemented by this parameterisation yields the extended brightness change constraint equation (EBCCE):

    (∇g)^T (t + A x + a t) + g_t = 0.    (4.20)
Geometric transformations of the local neighbourhood may be computed from the components
of the matrix A = (aij ), like
Figure 4.5: Elementary affine transformations of a rectangular surface element. From left to right:
rotation, dilation, shear, stretching (from [Jähne et al., 1999]).
• rotation: rot(f ) = ∂v/∂x − ∂u/∂y = a21 − a12 ,
• dilation: div(f ) = ∂u/∂x + ∂v/∂y = a11 + a22 ,
• shear: sh(f ) = ∂u/∂y + ∂v/∂x = a12 + a21 ,
• stretching: str(f ) = ∂u/∂x − ∂v/∂y = a11 − a22 .
For a graphical illustration of elementary affine transformations see Fig. 4.5. For a more general
treatment of parametrization of 2D-optical flow fields in the context of the Lie group of continuous transformations, see appendix B.1.
4.3.2 Coarse-to-Fine Techniques
The temporal sampling theorem states a theoretical upper limit for the magnitude of displacements that can be analysed. Consider a moving sinusoidal pattern of wavelength λ. From Fig. 4.6 it is evident that only displacements up to a magnitude of λ/2 are unambiguously determinable. In this case the optical flow can be estimated by the minimal motion, indicated by the solid arrow in Fig. 4.6. Analogous to the spatial sampling theorem ∆x < λ/2, we can formulate a temporal sampling theorem:

    ∆t < T/2 = π/ω = π/(k^T f).    (4.21)
ω can be expressed in terms of the wave number k and the optical flow f via the plane equation Eq. (4.12). Apparently the maximum determinable displacements are limited by the magnitude of the highest spatial wave numbers contained in the image.
Coarse-to-fine techniques help to estimate large motions. By smoothing the image we eliminate the high frequency content of the images, so that we are able to estimate a coarse motion field. Then we “undo” the motion by warping the image back by means of the estimated coarse motion field. Now we can use the higher frequency content to estimate a finer motion field. Added to the previous coarse motion field, the fine motion field provides a more accurate approximation to the real motion field.
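The following sketch wraps an arbitrary per-level dense flow estimator (for example the local least squares sketch from section 4.2.1) into such a dyadic coarse-to-fine scheme; the Gaussian pre-smoothing, the simple block upsampling of the flow and the spline-based warping are illustrative choices and not prescribed by the methods discussed here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(img, f1, f2):
    """Warp img backwards by the flow (f1, f2) using bilinear interpolation."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return map_coordinates(img, [y + f2, x + f1], order=1, mode="nearest")

def coarse_to_fine(g0, g1, estimate_flow, n_levels=3):
    """Dyadic coarse-to-fine wrapper around a per-level dense flow estimator.

    estimate_flow(a, b) must return the flow components (f1, f2) between two
    frames of equal size.  At each level the second frame is warped back by
    the upsampled coarse flow, so that only the residual motion is estimated."""
    pyr0, pyr1 = [np.asarray(g0, float)], [np.asarray(g1, float)]
    for _ in range(n_levels - 1):
        pyr0.append(gaussian_filter(pyr0[-1], 1.0)[::2, ::2])
        pyr1.append(gaussian_filter(pyr1[-1], 1.0)[::2, ::2])
    f1 = np.zeros_like(pyr0[-1]); f2 = np.zeros_like(pyr0[-1])
    for a, b in zip(reversed(pyr0), reversed(pyr1)):
        if f1.shape != a.shape:
            # upsample the coarse flow and double its magnitude
            f1 = 2.0 * np.kron(f1, np.ones((2, 2)))[:a.shape[0], :a.shape[1]]
            f2 = 2.0 * np.kron(f2, np.ones((2, 2)))[:a.shape[0], :a.shape[1]]
        b_warped = warp(b, f1, f2)           # "undo" the motion estimated so far
        d1, d2 = estimate_flow(a, b_warped)  # residual, finer motion
        f1, f2 = f1 + d1, f2 + d2
    return f1, f2
```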
This procedure can be improved in several ways:
If an estimation on a coarse level is incorrect, the fine-level estimate has no chance of correcting the errors. To fix this, we must have knowledge of the error in the coarse-level estimates.
That suggests working in a probabilistic framework. Indeed, a “state evolution equation” and a
“measurement equation” can be proposed similar to Kalman-filtering [Simoncelli, 1993].
Figure 4.6: Illustration of the temporal sampling theorem for a sinusoidal pattern of wavelength λ (from
[Jähne et al., 1999]).
For more accurate computation, additional filters can be introduced which slice the bandwidth
into smaller pieces than those given by the dyadic pyramid structure. This results in a combined
multi-resolution and multi-scale approach [Ruhnau et al., 2005b].
The motion can be distributed very irregularly over the image plane. In these situations a
selective multi-resolution approach is suggested [Cohen and Herlin, 1999].
4.3.3 Robust Estimation
There are situations in which parametrised flow field models fail to determine the optical flow correctly: these include the presence of multiple motions, like motion discontinuities at boundaries
(occluded multiple motions) or different motions being overlaid (transparent multiple motions),
but also the presence of reflections or corrupted pixels.
Least squares estimation tries to minimise a quadratic objective function ρ(x) (compare
Eq. (4.5)):
\[
\hat{f} = \arg\min_{f} \int_{-\infty}^{\infty} w(x - x')\,\rho\!\left[ (\nabla g)^T f + g_t \right] dx' , \quad \text{with } \rho(x) = x^2 . \tag{4.22}
\]
The influence function ψ(x) of the objective function is defined as the derivative of ρ with
respect to x:
\[
\psi(x) = \frac{\partial \rho(x)}{\partial x} . \tag{4.23}
\]
In the least squares case the influence of data points increases linearly and without bound,
so that outliers which do not fit the model, like corrupted pixels, have a great influence and
distort the estimation of the correct optical flow dramatically. This is due to the fact that by using
a quadratic objective function we inherently assume that the residual errors are Gaussian and
independently distributed within the neighbourhood U.
To achieve a more robust parameter estimation we have to replace the quadratic objective function by another suitable function, which is referred to as an M-estimator in statistics. The influence
function of an M-estimator has to be redescending, i.e. it has to approach zero for large residuals
after an initial increase for small values. [Geman and McClure, 1984] proposed a commonly used
M-estimator (Fig. 4.7), which reads, together with its influence function,
\[
\rho(x, \sigma) = \frac{x^2}{\sigma + x^2} , \qquad \psi(x, \sigma) = \frac{2 x \sigma}{(\sigma + x^2)^2} , \tag{4.24}
\]
where σ is a scale parameter.
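A common way to minimise such a robust objective is iteratively reweighted least squares, where the weight ψ(r)/r down-weights pixels with large residuals. The sketch below is our own illustration (the derivative arrays are assumed to be precomputed and flattened over the neighbourhood U):

```python
import numpy as np

def irls_flow(gx, gy, gt, sigma=1.0, iters=10):
    """Robust local flow estimate by iteratively reweighted least squares
    with Geman-McClure weights w = psi(r)/r."""
    w = np.ones_like(gx)
    f = np.zeros(2)
    for _ in range(iters):
        A = np.stack([gx, gy], axis=1)            # N x 2 design matrix
        W = w[:, None]
        # weighted normal equations  (A^T W A) f = -A^T W g_t
        f = np.linalg.solve(A.T @ (W * A), -A.T @ (w * gt))
        r = gx * f[0] + gy * f[1] + gt            # residuals of the BCCE
        w = 2 * sigma / (sigma + r ** 2) ** 2     # psi(r)/r for the Geman-McClure norm
    return f
```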
Figure 4.7: An example for an M-estimator: a Geman and McClure-norm. b Its derivative (from
[Jähne et al., 1999]).
Given a robust formulation, there are numerous optimization techniques that can be employed
to recover the motion estimates and the most appropriate technique will depend on the particular
formulation and choice of the ρ-function. For detailed information about robust estimation the
reader is referred to [Black and P., 1996].
4.3.4 Dealing with Brightness Changes
In many situations the constraint of brightness constancy Eq. (4.2) is violated. In some cases we
are able to find a physical model for the time-dependent brightness variation. So we can estimate
both the correct optical flow field f and the parameters a of the underlying physical process.
The approach of [Haussecker and Fleet, 2001] constitutes an extension of the brightness change
constraint equation to parameterised models of brightness variation, provided that these models
are linear in a or can be linearised by a Taylor series expansion.
In section 4.2.1 we have stated that for the case that the brightness g(x, t) is constant along a
path x(t) for all times, we are able to derive a constraint on the optical flow. Now we allow, that
the brightness along the path may change according to a time-dependent parameterised function
h(g0 , t, a):
g(x(t), t) = h(g0 , t, a),
(4.25)
where g0 = g(x(t0 ), t0 ) denotes the image at time t0 , and a is the Q-dimensional parameter
vector for the brightness change model. The total derivative on both sides of Eq. (4.25) yields the
generalised brightness change constraint equation (GBCCE),
\[
(\nabla g)^T f + g_t = \frac{d}{dt}\, h(g_0, t, a) , \tag{4.26}
\]
which reduces to the well-known BCCE if h is constant.
Assuming smoothness in the vicinity of a = 0, h can be expanded by a Taylor series:
\[
h(g_0, t, a) = h(g_0, t, 0) + \sum_{k=1}^{Q} a_k \frac{\partial h}{\partial a_k} . \tag{4.27}
\]
Without loss of generality, we choose a parametrization such that a = 0 produces the identity
transformation: h(g0 , t, 0) = g0 . By differentiating Eq. (4.27), and assuming ak to be constant
through time within local windows of temporal support, we get an expression for the temporal
brightness variation:
\[
\frac{d}{dt}\, h(g_0, t, a) = \sum_{k=1}^{Q} a_k \frac{d}{dt} \frac{\partial h}{\partial a_k}
= \sum_{k=1}^{Q} a_k \frac{\partial}{\partial a_k} \frac{dh}{dt}
= \left( \nabla_a \frac{dh}{dt} \right)^{\!T} a .
\]
As h is analytic in a, we could exchange the order of differentiation. We have arrived at an expression for dh/dt which is basically a scalar product of the data vector ∇a(dh/dt) and the parameter vector
a, so that we can estimate the parameters by using a procedure such as total least squares (see
appendix A).
In the following some models of brightness variation are presented:
Linear source terms When sources are present, the brightness depends linearly on time: h =
qt, where q denotes the source strength. The GBCCE becomes
(∇g)T f + gt = q.
Exponential decay In relaxation processes the time-dependent brightness can be modeled by
an exponential decay: h = g0 exp(−κt), where κ denotes the relaxation constant. By differentiating h with respect to t the exponential function reproduces itself, so that we can write
for the GBCCE:
(∇g)T f + gt = −κg.
(4.28)
Diffusion process Fick’s second law (see Eq. (2.8)) states that for isotropic diffusion the time
rate of change of the grey value is proportional to its Laplacian, with the diffusion constant D
as proportionality constant. The GBCCE in this case reads
(∇g)T f + gt = D∇2 g.
These physical models of brightness variation can be combined with all differential or tensor-based techniques: the BCCE is replaced by the GBCCE, and the minimisation is
carried out over the optical flow f and the parameters a simultaneously.
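As an illustration of this combination, the following sketch (our own, with the precomputed derivative arrays of one neighbourhood assumed as input) estimates the flow and the relaxation constant κ of the exponential-decay model simultaneously by total least squares, i.e. from the eigenvector to the smallest eigenvalue of the extended structure tensor:

```python
import numpy as np

def gbcce_exponential_decay(gx, gy, g, gt):
    """Simultaneous estimate of the optical flow (u, v) and the relaxation constant
    kappa from the GBCCE with exponential decay, gx*u + gy*v + kappa*g + gt = 0."""
    D = np.stack([gx, gy, g, gt], axis=1)      # data vector per pixel
    J = D.T @ D                                # extended structure tensor
    eigval, eigvec = np.linalg.eigh(J)
    p = eigvec[:, 0]                           # eigenvector to the smallest eigenvalue
    p = p / p[3]                               # fix the gt-component to one
    u, v, kappa = p[0], p[1], p[2]
    return u, v, kappa
```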
4.4 Literature Review
Each of the optical-flow methods (differential, frequency-based or tensor-based) presented in section 4.2 has been adapted for evaluation in the field of fluid mechanics. The term “optical flow” suggests
its application to image sequences of continuous tracers, like heat or concentration, and indeed most of the literature addresses continuous tracers. On the other hand, in practice fluid flow
analysis is to a great extent performed particle-based (see chapter 3), which does not prevent
optical-flow based methods from being applied; examples can be found in the literature.
4.4.1 Differential Techniques
[Ruhnau et al., 2005b] applied the method of [Horn and Schunk, 1981] (see Eq. (4.7)) to images
recorded with the conventional PIV-technique, described in 3.3.1. They used a coarse-to-fine
strategy, as in section 4.3.2. The same authors replaced the global smoothness constraint by a
prior, relying on the Stokes equation ([Ruhnau and Schnörr, 2006]) and on the vorticity transport
equation ([Ruhnau et al., 2006]).
Instead of using the BCCE, other authors used the 2D-continuity equation Eq. (4.4) as model.
This was justified by the special kind of recording technique (like transmittance imagery in
[Wildes et al., 2000]) or data (like satellite imagery in [Corpetti et al., 2002]). The latter used a
second-order div-curl-regularisation scheme instead of just assuming global smoothness. Moreover, they applied their scheme to PIV-sequences ([Corpetti et al., 2005]).
4.4.2 Frequency-based Techniques
[Larsen, 1998] applied the local energy distribution to satellite images using the frequency-based
techniques presented in 4.2.2. They used the optical-flow estimates together with confidence
measures as an input for a regularization method based on the Markovian random field approach.
4.4.3 Tensor-based Techniques
[Jehle et al., 2004] analysed the motion of sand grains in a geotechnical application, recorded using rigid endoscopes. Under certain conditions, the grains can behave like a fluid. [Garbe et al.,
2003] estimated both velocity vector field and heat flux simultaneously in image sequences,
recorded using infrared thermography. They expanded the structure tensor technique (4.2.3) by a
model including brightness changes (4.3.4).
5 Method of Two Wavelengths
In this chapter we will describe the basic concepts of our measurement technique. It is based
on retrieving 3D information from 2D data – the intensity (grey value) being the source of the
depth (i. e. the coordinate perpendicular to the image plane). For illustration see Fig. 5.1, left and
Fig. 5.2, left. In order to do this, we have to consider two tasks:
1. The intensity distribution provides a depth map, taking into account the medium between illumination source, object and camera (spectral absorbance), the properties of the illumination
(spectral characteristics), the attributes of the object (e. g. reflectance, object size), and the
camera characteristics (e. g. linearity of the camera sensor, pixel size). Schematically we
write:
g(x, y) → z(x, y),   i.e.   ℝ² × ℝ → ℝ² × ℝ
2. Even once we have obtained a depth map, we do not have the whole 3D information, because
one part of the object may be occluded by another. Unlike in stereo vision, we acquire
images of the object from one perspective only, so that we cannot reconstruct
occlusions directly. But in appropriate cases we can infer the 3D structure using
spatio-temporal information, for example by assuming that the objects move smoothly. We
write:
z(x, y) → (x, y, z),   i.e.   ℝ² × ℝ → ℝ³
This chapter deals exclusively with the first task: The reconstruction of a depth map using
brightness data. How to incorporate higher level properties of the flow, to reconstruct the whole
3D-information, is addressed in section 7.6.
5.1 Method of One Wavelength
The precursor of our method was originally proposed by [Debaene, 2005] in the context of biofluidmechanics. To distinguish it from our method we will refer to it as the method of one wavelength.
Her intention was to estimate the wall shear rate, which influences the properties of the fluid
(for the medical background we refer to section 8.4.1). For the calculation of the wall shear stress,
3D information about the flow field near the wall must be at hand. Therefore a measurement
technique capable of acquiring and processing 3D data has to be found.
As in other tracer-based flow measurement methods, small, reflective, floating particles are
added to the fluid. The tracer particles have to be exactly spherical, and their size distribution has to be narrow, for reasons explained later. Unlike in particle image velocimetry,
the whole fluid volume is illuminated by light of a specific spectrum. A dye is added to the fluid,
which absorbs light of a certain wavelength. The particles are recorded by a camera which points
perpendicularly to the wall.
Figure 5.1: Method of one wavelength. Left: 3D positions in space are obtained by analysing the grey
values in a 2D image. Right: A monochromatic beam of light penetrates the dyed fluid with the intensity
I0 , and hits the particle with intensity Ip after covering the distance z. After reflecting, it passes through
the dye again, and hits the camera sensor with the intensity I. The intensity decrease can be calculated
using Beer-Lambert’s law.
The dye limits the penetration depth of the light into the flow according to Beer-Lambert’s law.
The intensity Ip of the light approaching the particle is
Ip (z) = I0 exp −z/z̃∗ ,
(5.1)
where I0 is the light’s intensity before penetrating into the fluid, z is the distance of the particle’s
surface from the wall, and z̃∗ is the penetration depth (Fig. 5.1, right). The light is reflected by
the particle, and passes the distance z again, before approaching the wall with the intensity
I(z) = Ip exp −z/z̃∗ = I0 exp −2z/z̃∗ = I0 exp −z/z∗ ,
(5.2)
where an effective penetration depth z∗ = z̃∗ /2 was introduced for convenience. Within the
illuminated layer the particles appear more or less bright, depending on their normal distance to
the wall: Particles near the wall appear brighter, i. e. they have a higher grey value than particles
farther away from the wall. The correlation between the recorded intensity I(z) of a particle and
its distance to the wall, which is expressed in terms of the hypothetical intensity I0 of the particle
at the wall and z∗ , can be assessed experimentally.
The particle’s intensity I(z) is mapped to a grey value g(I(z)) by the procedure of imaging.
For simplicity, we assume, that the response curve of the camera is linear, i. e. we are allowed to
write
g(z) = g0 exp −z/z∗ .
(5.3)
Eq. (5.3) can be solved for the depth z:
\[
z = z_* \left( \ln g_0 - \ln g \right) . \tag{5.4}
\]
In order to determine the depth z according to Eq. (5.4) we have to know:
Figure 5.2: Method of two wavelengths. Left: 3D positions in space are obtained by analyzing the grey
values in two 2D images (one for each wavelength). Right: Two monochromatic beams of light penetrate
the dyed fluid. Because their penetration depths differ, their intensities progress differently.
• The grey value of the particle at the surface, g0 = g(z = 0). Therefore we require that the
particles are exactly spherical and that they all have the same size. The latter can be
loosened to the requirement of a narrow size distribution.
• The penetration depth z∗ of light of a specific wavelength into a certain medium. We have
to choose a light source whose light is as monochromatic as possible, because then we can
exclude any hardening effects.
5.2 Method of Two Wavelengths
It turns out that the most severe restriction of the method of one wavelength is the required tightness of the size
distribution of the tracer particles. In order to use particles of variable size, we have to illuminate
with light of two distinct wavelengths (i.e. with two different penetration depths z∗1 and z∗2). We
write down Beer-Lambert’s law for each wavelength:
g1 (z) = g01 exp −z/z∗1
and g2 (z) = g02 exp −z/z∗2 .
We solve this system of equations for the depth of the particle:
\[
z(g_1, g_2) = \frac{z_{*1}\, z_{*2}}{z_{*1} - z_{*2}} \left( \ln\frac{g_1}{g_2} + \ln\frac{g_{02}}{g_{01}} \right) . \tag{5.5}
\]
Note that the depth of the particle now depends on the particle properties only through the ratio of the intensities g01/g02,
which is the same for all particles and can be calibrated.
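A minimal sketch of the depth reconstruction, implementing Eq. (5.4) and Eq. (5.5) (the numerical values in the consistency check are arbitrary example numbers):

```python
import numpy as np

def depth_one_wavelength(g, g0, z_star):
    """Depth from a single grey value, Eq. (5.4)."""
    return z_star * (np.log(g0) - np.log(g))

def depth_two_wavelengths(g1, g2, z1, z2, ratio_g01_g02=1.0):
    """Depth from the grey values at the two wavelengths, Eq. (5.5).
    Only the calibrated ratio g01/g02 enters, not the absolute particle brightness."""
    return z1 * z2 / (z1 - z2) * (np.log(g1 / g2) - np.log(ratio_g01_g02))

# consistency check with synthetic grey values (assumed example numbers, depths in mm)
z1, z2, g01, g02, z_true = 1.0, 2.0, 200.0, 180.0, 0.7
g1 = g01 * np.exp(-z_true / z1)
g2 = g02 * np.exp(-z_true / z2)
print(depth_two_wavelengths(g1, g2, z1, z2, g01 / g02))   # -> 0.7
```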
Besides its applicability to systems with heterodisperse particles, the method of two wavelengths has further benefits compared to the method of one wavelength:
• The particles can be imaged as streaks: the particle grey values g1 and g2 are multiplied
by a common attenuation factor (depending on the exposure time), which cancels out when
calculating the depth according to Eq. (5.5).
Table 5.1: Comparison of the two methods of depth reconstruction with respect to their advantages and
disadvantages.

Method of one wavelength:
  + easy setup
  + exploits full camera frame rate
  − very high requirements to particles (monodispersity, shape)

Method of two wavelengths:
  + applicable to heterodisperse particles
  + applicable to higher exposure times
  − greater relative error in depth determination
• We can relax the requirement of exactly spherical particles. But we have to take
care that the particles do not rotate significantly from one frame to the next.
The advantages and disadvantages of the two methods of depth reconstruction proposed in this
chapter are summarized in table 5.1. Figure 6.9 in section 6.3.2 illustrates the different coverage
of the emission spectra of the light sources by the absorption spectrum of the dye.
5.3 Error Analysis
In table 5.1 we noted that, compared to the method of one wavelength, the method of two
wavelengths has a greater relative error in the depth determination. In this section we will address
this fact in more detail by applying error propagation to Eq. (5.4) and Eq. (5.5).
Method of one wavelength
First we consider the simpler case of the method of one wavelength (see Eq. (5.4)). According
to error propagation we find for the error in depth ∆z:
\[
\Delta z(g) = \sqrt{ \left( \frac{\partial z}{\partial g} \right)^{\!2} \Delta g^2 } = z_* \frac{\Delta g}{g} . \tag{5.6}
\]
In our model the error in the grey value ∆g consists of two parts:
1. ∆g = c, where c is a constant: this is the case if we have grey value independent noise on
the chip. As a first approximation we can model the chip noise in this way.
2. ∆g = k(σ)g, where k is a function of the particle size, expressed by σ: especially for σ small
in comparison to the pixel size, the grey value of a particle depends on the position of its
centroid relative to the pixel grid. We will call this error the grid error. The relation k(σ) is found
using simulations (see section 8.2.4).
Plugging this into Eq. (5.6), we get the following expression for the grey value dependent error of the depth determination using the method of one wavelength:
\[
\Delta z(g) = z_* \left( \frac{c}{g} + k(\sigma) \right) .
\]
We define a relative error for the depth ez (g) by dividing the grey value dependent absolute error
by the grey value dependent absolute depth:
\[
e_z(g) \equiv \frac{\Delta z(g)}{z(g)} = \frac{c/g + k(\sigma)}{\ln g_0 - \ln g} .
\]
Figure 5.3: Plot of z(g) resp. ∆z(g) (left) and ∆z(g)/z(g) (right) using the one-wavelength method with
the parameters z∗ = 1 mm, g0 = 255, c = 1, k(σ) = 0.1.
This means that the overall error can be decomposed into two terms: one stemming from the
grey value independent noise, decreasing with increasing g, and one stemming from the grey
value dependent grid error. From Fig. 5.3, right, we find that the relative error is very large for
small grey values but soon approaches a minimum. (For demonstration we have set c =
1, k(σ) = 0.1, z∗ = 1, g0 = 255.)
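The curves of Fig. 5.3 can be evaluated with a few lines, using exactly the demonstration parameters above:

```python
import numpy as np

# relative depth error of the one-wavelength method (cf. Fig. 5.3)
z_star, g0, c, k = 1.0, 255.0, 1.0, 0.1

g = np.arange(1, 255)
dz = z_star * (c / g + k)                  # absolute error
z = z_star * (np.log(g0) - np.log(g))      # depth, Eq. (5.4)
ez = dz / z                                # relative error
print(g[np.argmin(ez)], ez.min())          # grey value with the smallest relative error
```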
Method of two wavelengths
Now we treat the more complicated case of the method of two wavelengths (see Eq. (5.5)). Here
we find for the error in depth ∆z:
\[
\Delta z(g_1, g_2) = \sqrt{ \left( \frac{\partial z}{\partial g_1} \right)^{\!2} \Delta g_1^2 + \left( \frac{\partial z}{\partial g_2} \right)^{\!2} \Delta g_2^2 }
= \left| \frac{z_{*1}\, z_{*2}}{z_{*1} - z_{*2}} \right| \sqrt{ \frac{\Delta g_1^2}{g_1^2} + \frac{\Delta g_2^2}{g_2^2} } . \tag{5.7}
\]
After inserting the errors in the grey value ∆g1,2 = c + k(σ)g1,2 , and dividing by the grey
value dependent absolute depth we get for the grey value-dependent relative error for the depth
ez (g1 , g2 ):
\[
e_z(g_1, g_2) \equiv \frac{\Delta z(g_1, g_2)}{z(g_1, g_2)} = \frac{ \sqrt{ c^2 \left( 1/g_1^2 + 1/g_2^2 \right) + 2 k^2 } }{ \ln g_1 - \ln g_2 + \ln( g_{02} / g_{01} ) } .
\]
The absolute error ∆z(g1 , g2 ) is influenced by the grey value-independent noise for g1 ≈ 0
(i. e. for deep particles). With g1 approaching g2 (i. e. for particles at the surface), the absolute
overall error approximates the grid error. Due to error propagation a grid error of k = 0.1 alone
causes an absolute error of ∆z ≈ 0.14. In Figure 5.4 the absolute errors are plotted for c = 1,
k = 0 (left), for c = 0, k = 0.1 (center) and for c = 1, k = 0.1 (right) using g01 /g02 = 1 and
requiring g2 ≥ g1 .
In Fig. 5.5 we plot the relative errors for the same parameters as for the absolute errors. We
find that the grey value independent noise contributes to the overall relative error only for g1 ≈ 0
and for g1 ≈ g2. In between, the relative error is determined mainly by the grey value dependent
grid error, giving acceptable results for 10 < g1 < (2/3) g2.
Figure 5.4: Plot of the absolute error ∆z(g1 , g2 ) using the two-wavelength method with the parameters
g01 /g02 = 1, c = 1, k(σ) = 0.1. On the left the absolute error stemming from ∆g = c is plotted; in
the middle, the absolute error using ∆g = k(σ)g is given; on the right both errors were combined. The
top row shows error maps, where the color map ranges from 0 to 0.3; the bottom row shows sections with
g1 = 50, g1 = 125 and g1 = 200.
Figure 5.5: Plot of the relative error ∆z(g1 , g2 )/z(g1 , g2 ) using the two-wavelength method with the
parameters g01 /g02 = 1, c = 1, k(σ) = 0.1. On the left the relative error stemming from ∆g = c
is plotted; in the middle, the relative error using ∆g = k(σ)g is given; on the right both errors were
combined. The top row shows error maps, where the color map ranges from 0 to 0.5; the bottom row
shows sections with g1 = 50, g1 = 125 and g1 = 200.
6 Hardware Components
In this chapter the various components of the data acquisition hardware used to record image
sequences are explained. We start with the objects of interest, in this case small particles
suspended in the fluid (section 6.1). Then we have a closer look at the medium
the light has to pass on its way to the object (section 6.2). In section 6.3 the light sources are
explained, as well as the characteristics of the light itself. The last section is devoted to the
optical system, to the CCD cameras used in the experiments and to the triggering of the individual
components.
6.1 Particles as Tracer
Like PIV or PTV (see section 3.3), our method is based on determining position and velocity of
small particles, which are added to the fluid. These have to fulfill the following requirements:
Flow representation The particles ideally have to be capable of following the fluid, must not
affect the motion of the fluid and must not influence each other. Important parameters are:
particle size, specific weight of the particles, hygroscopicity of the particles and particle density. This is a common requirement which all particle-based fluid measurement methods have to
satisfy.
Visibility The contrast between the particles and the surroundings has to be high. Their imaged
grey value has to be significantly above the noise level of the camera. Therefore the particles
have to be imaged as brightly as possible. The visibility depends on the reflectance of the
particles and on the particle size. Regarding particle size, we have to find a trade-off between flow
representation and visibility, which depends on the observed flow and on the experimental
setup.
Scattering properties Our method is based on relating the imaged brightness to the position
of the particle according to Beer-Lambert’s law. A basic requirement for doing so is that we
operate in the geometric scattering range, which constrains the particle size in relation
to the light wavelength. Moreover, this imposes a requirement on the shape of the particles, which has
to be spherical.
6.1.1 Scattering Properties
Light of a certain wavelength is scattered by a spherical particle of a certain diameter. We ask for
the differential scattering cross-section which is defined as the scattered intensity per solid angle
element.
¹ MiePlot: a computer program for scattering of light from a sphere using Mie theory and the Debye series; http://philiplaven.com/MiePlot.htm
Figure 6.1: Polar diagrams of the normalized intensities of perpendicular (dark green) and parallel
(light green) polarised light, scattered at small spherical particles according to Mie’s theory, for
λ = 500 nm and particle radii a = 40 nm, 400 nm, 4 µm and 40 µm (q = 0.5, 5, 50, 500). The light
approaches the particle from the left. The particles have the refractive index of water surrounded by
vacuum (n = 1.3377). Left column: linear representation. Right column: logarithmic representation
(plotted using MiePlot¹).
Figure 6.2: Microscope images of the particles used in our experiments (from left to right: Optimage,
3M, Potters; the scale bars correspond to 100 µm).
To solve this problem we make use of classical electrodynamics, starting with Maxwell’s
equations, and assume a plane wave being scattered at a homogeneous sphere.
The general solution was first found by [Mie, 1908] and is given in detail in [van de Hulst,
1981]. We find that, for a given index of refraction, the angle-dependent scattered
intensity is solely a function of normalised diameter q, which is defined as
\[
q = \frac{2 \pi a}{\lambda} ,
\]
where a is the radius of the particle and λ is the wavelength of the incoming light. In Fig. 6.1 we
show polar diagrams of the scattered intensity for some selected q. It can be seen that there is
a strong dependence of the scattered light on the angle of observation.
For q ≪ 1, Mie’s solution tends to Rayleigh’s approximation: the light scattered at 90° is
fully polarised in the direction orthogonal to the scattering plane, whereas in the forward and backward
directions (0° resp. 180°) both polarisation directions contribute equally.
With increasing q two trends are apparent from the polar diagrams (Fig. 6.1). Firstly, forward scattering develops. Secondly, further maxima emerge; their number increases
approximately linearly with rising normalised diameter. Exact measurements of the angular distribution of intensity allow a determination of the size of the spheres.
For q ≫ 1, Mie’s solution tends to the geometric limit: we can treat parts of the wave
(of width much larger than λ and yet small compared to the radius a of the sphere)
as light rays. The rays can be traced, considering reflection and refraction according to Snell’s
law and Fresnel’s formulas. Exceptions occur in or near a focal line (or a focus), where the
intensity according to geometric optics would tend to infinity; examples are rainbows and glories.
6.1.2 Particle Characteristics
In table 6.1 at the end of this chapter we list the types of particles we used in our experiments. Because the normalised diameter q is much greater than unity for all particles, we operate in
the geometric range to a good approximation. This allows us to neglect the effects of Mie scattering.
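A quick check of the normalised diameters, using the particle diameters from Table 6.1 and λ = 500 nm (the resulting values are of the same order as those listed in the table):

```python
import numpy as np

wavelength = 500e-9   # m
for name, d in [("Optimage", 30e-6), ("3M", 30e-6), ("Potters", 100e-6)]:
    q = 2 * np.pi * (d / 2) / wavelength   # normalised diameter, radius a = d/2
    print(name, round(q))                  # q >> 1, i.e. geometric scattering range
```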
In the introduction to this section we mentioned that one important characteristic of tracer
particles is that they have to follow the fluid optimally. Stokes’ law of resistance gives
the drag force D exerted on a sphere of diameter d in a fluid with dynamic viscosity
µ:
D = 3πµdUs ,
Figure 6.3: Spectra of tartracine and new coccine. The absorption (black) resp. the penetration depth
(red) is plotted against the wavelength (in nm). The penetration depths are calculated from the measured
spectra using concentrations ctartracine = 200mg/l = 374µmol/l and cnew coccine = 100mg/l = 165µmol/l.
where Us is the difference between the particle velocity and the fluid velocity, the velocity slip. By
equating the drag force with the inertial force F = (1/6)π(ρ′ − ρ)d³a of a particle with density ρ′
and acceleration a in a fluid with density ρ, we obtain for the velocity slip:
\[
U_s = d^2\, \frac{\rho' - \rho}{18 \mu}\, a .
\]
The velocity slip is influenced much more strongly by the particle diameter (quadratically) than
by the difference in density (linearly). Assuming an acceleration of 10 cm/s², we have a velocity slip
for the Optimage particles of about 1 µm/s, for the 3M particles of about 20 µm/s and for the
Potters particles of about 50 µm/s.
On the other hand, larger particles appear much brighter to the camera than smaller ones; again
the dependence on the diameter is quadratic. This is the reason why we preferred the Potters spheres
over the Optimage particles and the 3M spheres in the convection tank experiments, where only slow flows occur.
Figure 6.2 shows magnified images of the particles. We see that, compared to the spheres by
3M and by Potters, the Optimage particles are very irregularly shaped. For conventional PIV this is
irrelevant, but for our application it might become a serious problem: because we estimate the depth
from the apparent intensity, and because the intensity depends on the projected size of the particle,
we get an estimation error if the particles rotate between two recordings. For that reason, we
used the 3M spheres instead of the Optimage particles for the falling film experiments.
6.2 Dye as an Absorber
A dye acts as an absorber for light of a specific wavelength range. We demand that
the absorption follows Beer-Lambert’s law, which is explained in detail in section 6.2.1. Further
Figure 6.4: Left: The luminous efficiency (in lm/W) of LEDs has outperformed several other light
sources in the recent past. There is hope for further improvements. Right: The luminous flux per
package of LEDs has increased exponentially over the recent 30 years.
requirements are that the dye is not toxic and not harmful to the environment, that its dosage
is simple, that it is soluble in water and that it is inexpensive – at least if we want to use it in
voluminous tanks.
6.2.1 Beer-Lambert’s Law
Bouguer (1729) and Lambert (1760) found that the attenuation dI of the intensity of light traversing a clear medium is proportional to the intensity I(z) and to the traversed layer
thickness dz:
dI = k(λ)I(z)dz,
with the wavelength-dependent proportionality constant k(λ).
This holds under certain conditions only:
• The incoming light has to be monochromatic and collimated.
• The absorbing molecules have to be dispersed homogeneously in the fluid. They must not
scatter and must not exhibit any self-interaction.
• Scattering and reflection at the surface of the sample result in a light attenuation which masks
the effect of absorption, and therefore should be excluded.
Beer showed in 1852 that for most solutions of absorbing substances the proportionality constant
k is in turn proportional to the concentration c of the particular substance. Beer-Lambert’s law
follows:
dI = −α(λ)cIdz,
where the proportionality constant α is known as the molar absorption-coefficient with dimensions [α]=(mol/l)−1 cm−1 . We can integrate this to
I(z) = I0 exp(−α(λ)cz) = I0 exp(−A(λ)),
where I0 is the incoming intensity, and A(λ) is called absorbance. The latter is a nondimensional
quantity. We arrive at our familiar formulation of Beer-Lambert’s law Eq. (5.1), if we introduce
Figure 6.5: Left: Physical function of a LED. Right: Low power vs. high power LED.
the penetration depth z∗ , defined as
\[
z_*(\lambda) = \frac{1}{\alpha(\lambda)\, c} .
\]
Put simply, the penetration depth is the depth at which the intensity has decayed to 1/e of its
original value.
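For example, the penetration depth can be evaluated directly from an absorption coefficient and a concentration. The α value below is only an assumed example of the order of magnitude read off Fig. 6.3; the concentration is the tartracine concentration quoted in the figure caption:

```python
alpha = 2.0e4               # molar absorption coefficient in (mol/l)^-1 cm^-1 (assumed example)
c = 374e-6                  # concentration in mol/l (tartracine, 200 mg/l)
z_star = 1.0 / (alpha * c)  # penetration depth in cm
print(z_star)               # ~0.13 cm, i.e. a millimetre-scale penetration depth
```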
6.2.2 Example Spectra of Dyes
There are many dyes to choose from. A good overview is given in [Green, 1990]. The most
important criterion for our decision was that the spectrum of the dye should match the spectra
of the light sources (section 6.3), i.e. that the penetration depths with respect to the LED spectra differ by about a factor of 2–5, and that the dye exhibits a good absorption in the considered
spectral range. Further criteria were solubility, manageability and price of the dye. We found two
dyes which are broadly used in industry (for example to dye textiles and food): the yellow dye
Tartracine, which has its absorbance peak at 425 nm, and the red dye New Coccine, which has
its absorbance peak at 506 nm. An overview of their characteristics is given in table 6.2; the
spectra are printed in Fig. 6.3 together with their chemical structure formulas.
6.3 LEDs as Light Sources
In contrast to conventional PIV, in our experiments we cannot illuminate using laser light sheets.
The reason is that our measurement method is based on observing particles in a volume, not just
in a slice. Light emitting diodes (LEDs) have established themselves as small, reliable, efficient
and bright light sources in recent years.
Figure 6.6: Left: Photopic (vision using the cone-receptors at high intensities, red) and scotopic (vision
using the rod-receptors at low intensity, green) luminosity functions. The photopic curves include the CIE
1931 standard (solid), the Judd-Vos 1978 modified data (dashed), and the Sharpe, Stockman, Jagla &
Jägle 2005 data (dotted). Right: LEDs we used in our illumination setups. Luxeon III Emitter, C11A1,
Luxeon Star O (from left to right)
There were two technological developments which made LEDs feasible for our application:
firstly, LEDs have become more and more powerful over the last decades. The luminous flux per
package has doubled every 18 to 24 months over the recent 30 years, in accordance with Moore’s
law. Moreover, the luminous efficiency of LEDs has already outperformed that of incandescent
and mercury vapour lamps, and is expected to soon exceed that of fluorescent lamps and electrical
discharge lamps (Fig. 6.4). Secondly, in the mid-1990s green and blue LEDs became available, which make use of the wide band gap of the semiconductor indium gallium nitride (InGaN)
[Nakamura et al., 1994]. Thus lighting in the spectral range below 600 nm became possible.
6.3.1 Physical Function of High Power LEDs
Figure 6.5, left, shows a simple sketch of the physical function of a LED. Like a normal semiconductor diode, a light emitting diode consists of a chip of semiconducting material doped with
impurities to create a structure called a p-n-junction. Current flows easily from the p-side (anode)
to the n-side (cathode), but not in the reverse direction. When an electron meets a hole, it falls
into a lower energy level, and releases energy in the form of a photon. The wavelength of the
emitted light depends on the band gap energy of the materials forming the p-n-junction.
Because high power LEDs produce more heat, it was necessary to change their design. Historically, most LEDs have been constructed in a manner similar to the 5 mm type illustrated in
Fig. 6.5, top. The base pins serve as both electrical and thermal conduits, which limit how much
light can be produced. Fig. 6.5, bottom, represents a standard high power LED (like the Luxeon
III Emitter). The large metal slug dramatically improves heat transfer characteristics. This in turn
allows for higher current, a larger light emitting surface and proportionally higher light output.
The thermal resistance of the high power units is ten times lower than that of a conventional 5 mm
LED.
Figure 6.7:
Spectra of the Luxeon III Emitter-family as displayed in the data-sheet
(http://www.lumileds/pdfs/DS45.PDF).
Figure 6.8: Measured spectra of the LEDs we used for our illumination setups. We grouped the LED spectra in each case into two major divisions (indicated with blue and red colour). Top: Luxeon III Emitter.
Bottom: Luxeon Star O. Left: royal blue (455 nm). Right: blue (470 nm).
6.3.2 Selection of LEDs Used in Our Experiments
The LEDs we used in our experiments are listed in table 6.3 and displayed in Fig. 6.6, right.
The semiconductor material of all LEDs is indium gallium nitride (InGaN), and we chose four
wavelengths which turned out to be compatible with the selected dyes. In the table we specify the
energy flux (photometric: luminous flux) Φ, which is derived from the radiant energy (photometric: quantity of light) Q by
\[
\Phi = \frac{dQ}{dt} , \qquad [Q] = \mathrm{J\ (radiometric)\ resp.\ lm\,s\ (photometric)} , \qquad [\Phi] = \mathrm{W\ (radiometric)\ resp.\ lm\ (photometric)} .
\]
Figure 6.9: Measured Luxeon III Emitter spectra (royal blue and blue) together with the absorption
spectrum of tartracine (yellow).
The transition from radiometric to photometric units is made by weighting the radiometric quantities with the spectral response of the human eye. The response functions are displayed in Fig. 6.6.
With their help and the convention that at a wavelength of 555 nm an energy flux of 1 W corresponds to
a luminous flux of 683 lm (specification for photopic vision), the radiometric quantities can be
transformed into photometric quantities.
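Schematically (the V(λ) value below is only a placeholder that has to be looked up from the photopic curve of Fig. 6.6):

```python
def luminous_flux(radiant_flux_W, V_lambda):
    """Photometric flux Phi_v = 683 lm/W * V(lambda) * Phi_e (photopic vision)."""
    return 683.0 * V_lambda * radiant_flux_W

# e.g. 450 mW of radiant flux at an assumed V(lambda) = 0.06 -> roughly 18 lm
print(luminous_flux(0.45, V_lambda=0.06))
```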
A rough description of the shape of the spectra is given in the data sheet (Fig. 6.7, left). Taking a
closer look at the spectra of the LEDs with a spectrometer of the type “USB2000”, manufactured
by Ocean Optics, one sees that the spectra differ in a significant way. We grouped the spectra
of the Luxeon III Emitter LEDs and of the Luxeon Star LEDs (royal blue and blue in each case) in
such a way that the shapes of the spectra resemble each other and the maxima are roughly the same (Fig. 6.8).
We found that LEDs of the same production batch have similar spectra; the heterogeneity among
different batches may be explained by variations in manufacturing. From then on we took
care to order only LEDs of the same batch, so that it would not be necessary to
acquire the spectra of the LEDs and to group them.
Fig. 6.9 shows the absorption spectrum of the Tartracine dye (yellow) together with the measured emission spectra of the Luxeon III Emitter LEDs. The spectrum of the royal blue LEDs
has a greater overlap with the dye spectrum than that of the blue LEDs. That means that the penetration depth z∗,royal blue of light stemming from the royal blue LEDs is shorter than the one
stemming from the blue LEDs, z∗,blue. Exploiting this property, it is possible to reconstruct the
depth of particles of variable size (see section 5.2).
Figure 6.10: Left: Profile of selected cooling units and diagram of the thermal resistance as a function of the
length of the cooling units². Right: Schematic diagram of the thermal resistances occurring in our (simplified)
system.
6.3.3 Cooling of the LEDs
Though the LEDs are designed for operation under high-current conditions, it is necessary to
attach heat sinks to keep the temperature of the p-n junction permanently below its critical level.
To estimate a limit for the minimum length of the cooling units, it is instructive to calculate
the thermal resistance of the combined system of light emitting diodes (which can be regarded as
connected in parallel) and heat sinks.
In general, the overall thermal resistance RJA is given by
\[
R_{JA} = \frac{\Delta T}{P_{ges}} ,
\]
where ∆T = TJ − TA is the temperature drop between junction and ambient, and Pges is the
dissipated power. The overall thermal resistance can be calculated using the same rules as for the
computation of an electrical resistance (see the schematic diagram Figure 6.10, right):
\[
R_{JA} = \frac{1}{\sum_{i=1}^{N} 1/R_{JC,i}} + R_{CA} .
\]
Because all the thermal resistances in the LEDs are equal, we let RJC,i = RJC . We get for the
thermal resistance in the heat sink:
\[
R_{CA} = \frac{T_J - T_A}{P} - \frac{R_{JC}}{N} .
\]
In the most powerful version of our illumination units (Fig. 6.11, top) we maintain 20 LEDs, each
receiving a power of Pi = 0.6 A×4 V = 2.4 W, so that the overall power to be dissipated becomes
approximately Pges = 45 W, assuming a degree of efficiency of roughly 5%. With a junction
temperature of TJ = 135°C, according to manufacturer information, and an ambient temperature
of TA = 35°C, RCA becomes approximately 1.6 K/W, using the vendor’s specification of the
thermal resistance for each LED, RJC = 13 K/W³. From the diagram in Fig. 6.10, center, we can then estimate a minimum length of the cooling units of about 120 mm. To conclude, passive cooling is possible, but we have to take care to choose the heat sinks properly.
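The estimate can be reproduced with the numbers from the text:

```python
# required thermal resistance of the heat sink for N LEDs in parallel
T_J, T_A = 135.0, 35.0      # junction and ambient temperature in deg C
P_total = 45.0              # dissipated power in W
R_JC, N = 13.0, 20          # thermal resistance per LED in K/W, number of LEDs

R_CA = (T_J - T_A) / P_total - R_JC / N
print(R_CA)                 # ~1.57 K/W, in line with the ~1.6 K/W quoted above
```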
Figure 6.11: Technical drawings and photographs of the four different illumination-setups which were
designed for the experiments. From top to bottom: 1.) Luxeon III Emitter-LEDs (20× royal blue, 20×
blue). 2.) Luxeon Star O-LEDs (18× royal blue installed, space for 12× blue). 3.) C11A1-LEDs (16×
blue, 16× cyan). 4.) Luxeon III Emitter-LEDs with additional optics (5× blue, 5× cyan).
Figure 6.12: Sketch (left) and photograph (right) of the imaging setup used for the falling-film measurements.
6.3.4 Light Sources
Fig. 6.11 shows the various designs of illumination units, that were built for our experiments. We
will describe them from top to bottom:
1. The most powerful version contains 2×20 LEDs of type Luxeon III Emitter each being maintained at 2.4 W (only 60% of the maximum possible power). Blue and royal blue LEDs are
arranged alternately and circularly on the planar heatsinks. An aluminium mirror is used to
yield a flat illumination field, which covers a circular disc of 100 mm in diameter. This light
source provides a very homogeneous illumination over a relatively large area. It is used both
for the falling film experiment and for the convection tank experiment.
2. In the next illumination unit we wanted to make use of the collimator lens of the Luxeon
Star O LEDs. In order to attain a high luminance, we had to incline the cooling units by about
50° to the optical axis. The usable area is sufficient for 18 royal blue LEDs and 12 blue
LEDs (each operated at 1 W). The lighting is very inhomogeneous; the homogeneity
might be improved using a diffuser.
3. The C11A1 LEDs do not radiate isotropically but emit their light in a cone of about
30°. We arranged these light sources in two annuli, each consisting of 2×8 LEDs. Because
these LEDs were not available in royal blue, we chose the colour cyan instead. Each LED
consumed a power of 0.6 W. Because the area of tolerable homogeneity is relatively small
(about 2 × 2 cm²), this light source is installed in the falling film experiment only. Its irradiance is considerably higher (by a factor of 2) than that of source 1.
³ For an exact calculation of the overall thermal resistance, the thermo-glue needed for the attachment of the LEDs has to be considered. Here we assume that the glue layer is very thin.
Figure 6.13: Setup of the telecentric optics. αmax denotes the maximum angle enclosed between the horizontal
and the rays which leave the object and pass the aperture.
4. In contrast to the three previous designs, for the last one we wanted to keep a distance
of about 60 cm between the illumination setup and the object. This was made possible by small
plano-convex lenses attached in front of the Luxeon III Emitter diodes. Here we used
2×5 LEDs (royal blue and cyan), each operated at 2.4 W. This light source is used in
the convection tank. It provides a circular homogeneous area of about 5 cm in diameter.
6.4 Imaging Setup
In the previous sections we dealt with the objects, the medium and the light that pervades the
medium. Here we focus on the recording system, which is responsible for
obtaining 2D image sequences of the 3D world. Like any other conventional digital sequence
imaging system, our setup consists of optical components, a digital camera connected
to the computer hardware, and electronics responsible for synchronising the individual
processes. Figure 6.12 shows the arrangement of the various components used for the falling-film
measurements: illumination, telecentric lens, aperture, camera optics and high-speed camera.
The image acquisition unit employed in the convection-tank experiment is displayed in Fig. 8.13
in section 8.3.2. Instead of telecentric optics, a conventional objective was used there, which was
designed for use with high-resolution cameras.
6.4.1 Telecentric Optics
In the setup for the measurement in a falling film we used telecentric optics. This has the advantage, that the imaged lateral dimensions of the measured object are not dependent on their
distance from the camera sensor. The telecentric mapping is a parallel projection.
Figure 6.14: Quantum efficiency curves of the Photonetics CamRecord 2000-camera (left) and of the
Basler A641f-camera (right, yellow curve).
In an ideal telecentric system, the infinitesimally narrow aperture is located in the focal plane of
the large telecentric lens, so that only parallel principal rays can pass the aperture. The objective
lens is adjusted to an infinite object distance, so that the camera sensor lies in the focal plane
of the objective. Figure 6.13 shows the setup of the telecentric optics in our system.
In order to obtain an image, the radius r of the aperture is allowed to be finite. Given the focal
length of the telecentric lens f1 , we can calculate for the maximum inclination of the rays αmax
(see also [Rocholz, 2005]):
\[
\tan \alpha_{max} = \frac{r}{f_1} .
\]
Here we made use of the fact, that parallel rays are focused in the focal planes, so that the two
triangles containing αmax are congruent. By inserting the definition of the focal ratio of the
objective 2k ≡ f2 /r, we can calculate the angle αmax using well-known quantities:
\[
\tan \alpha_{max} = \frac{f_2}{2 k f_1} .
\]
It turns out that, because f1 ≫ f2 and k = O(10) in our system, αmax is a very small angle
(typically of order 1°), so that the conditions for telecentricity are met to a good approximation.
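For example, with assumed example values satisfying f1 ≫ f2 and k = O(10):

```python
import numpy as np

# maximum ray inclination in the telecentric setup, tan(alpha_max) = f2 / (2*k*f1)
f1, f2, k = 300.0, 25.0, 8.0      # telecentric/objective focal length in mm, f-number (assumed)
alpha_max = np.degrees(np.arctan(f2 / (2 * k * f1)))
print(alpha_max)                  # ~0.3 degrees -> telecentricity holds to good approximation
```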
The focal length of the compound system f12 is approximately the focal length of the telecentric lens, because
\[
\frac{1}{f_{12}} = \frac{1}{f_1} + \frac{1}{f_2} - \frac{d}{f_1 f_2} \approx \frac{1}{f_1} + \frac{1}{f_2} - \frac{f_1}{f_1 f_2} = \frac{1}{f_1} ,
\]
where we assumed that the lens separation is approximately d ≈ f1, i.e. that the aperture (which sits in the focal plane of the telecentric lens) is close to the objective.
6.4.2 CCD- and CMOS-Cameras
Cameras with sensors based on CCD (charge coupled device)-technology or CMOS (complementary metal oxide semiconductor) technologies have become a de facto standard in acquisition of
spatio-temporal data. Both image sensors accumulate signal charge in each pixel proportional to
the local illumination intensity. When exposure is complete, a CCD transfers each pixel’s charge
Figure 6.15: Pixelwise variance in the dark image of the Photonetics CamRecord-chip (left) and of a
clipping of the Basler A641f-chip (right).
packet sequentially to a common output structure, which converts the charge to a voltage, buffers
it and sends it off-chip. In a CMOS imager, the charge-to-voltage conversion takes place in each
pixel. This difference in readout techniques has significant implications for sensor architecture,
capabilities and limitations.
We are mainly interested in an optimal solution regarding speed, resolution, quantum efficiency, fill factor, noise and dark current. In the following we will compare the characteristics of
CCD and CMOS regarding those topics:
Speed and resolution Especially for highly turbulent flows it is important to record image
sequences with high resolution and high speed. Regarding speed, CMOS sensors have an advantage over CCDs because all camera functions can be placed on the image sensor. Furthermore, it is easily possible to adapt the resolution of a CMOS chip to one’s needs by partitioning
the pixel array almost arbitrarily; the product of frame rate and number of pixels remains
constant. When a CCD camera is downscaled, the product of
frame rate and number of pixels always decreases. For example, the Basler A641f delivers 14 fps at 1624×1234 pixels but only
30 fps at 640×480 (with CMOS technology, 90 fps would be possible at the small resolution).
Moreover, CCDs are restricted to a couple of predefined readout modes.
Quantum efficiency and fill factor Because of the small exposure times occurring when acquiring high-speed images, and the limited brightness of the light sources, we demand a high
sensitivity (i.e. a high quantum efficiency) of the camera sensors in the relevant
spectral range. The quantum efficiency (QE) is a measure of the ratio of collected electrons
to incident photons. Here both sensor types are roughly on the same level, whereas the fluctuations of the QE within one type may be enormous. This is due to the fact that not only the
base material (in both cases silicon and silicon oxide) is important for the efficiency of the
photoelectric effect and for carrier separation, but also the composition and thickness of the
top layers are crucial for the QE. The fill factor, defined as the ratio of light-sensitive area to
total pixel size, also determines the maximum achievable sensitivity. For example, transistors
or a lateral anti-blooming drain reduce the photosensitive area of a pixel. This is overcome
by illuminating the chip from behind, or by attaching microlenses on the chip.
Noise and dark current Especially when working under poor lighting conditions, noise contributes
seriously to the acquired grey values. Because our technique is based on processing the grey
values directly or indirectly, special care has to be taken to avoid camera noise. CMOS sensors generally suffer from fixed-pattern noise, because the charge-to-voltage conversion
takes place in each pixel. CCD sensors do not exhibit fixed-pattern noise, at least if the
whole sensor is read out with the same gain. Other noise sources are electronic noise and
dark-current noise; here, too, CCDs are at an advantage.
For our experiments we have chosen two cameras:
• In the falling film experiment we deal with comparably high fluid velocities; thus the frame
rate and the sensitivity of the camera have to be sufficiently high. The
Photonetics CamRecord 2000 camera achieves frame rates of up to 1000 fps (at a resolution of
512 × 512 pixels). Its QE is about 60% in the relevant spectral range (Fig. 6.14, left),
and because the chip is backside-illuminated the fill factor is virtually 100% (more technical
data in table 6.4). Though working according to the CCD principle, the camera sensor is
divided into 16 individually controlled segments, which results in a kind of “fixed-pattern
noise”. Moreover, the borders of the segments in turn exhibit a different noise characteristic
(Fig. 6.15, left). The camera works with a proprietary framegrabber board, installed in
a dedicated PC, which is capable of recording 2000 images at full resolution. The acquired image
sequence can be transferred to mass storage via 100 Mbit/s Ethernet.
• In the convection tank experiment the fluid velocities are typically two orders of magnitude
smaller than in the falling film experiment. Thus we can choose a camera
with an excellent high-resolution sensor and neglect the high-speed requirement. We chose
the Basler A641f camera because of its high resolution of 1624 × 1234 pixels and its good
chip characteristics (see Fig. 6.14, right, and Fig. 6.15, right).
6.5 Triggering Electronics
The illumination setup has to be synchronised with the data acquisition. Both cameras allow
for external triggering via a TTL signal. Figure 6.16, left, shows how the triggering works: dedicated
hardware, the Data Translation card⁴, triggers the LED arrays and the camera using one and the
same clock. In the upper left corner one possible triggering scheme is shown: the camera trigger
T1 sets the basic frequency; T2 and T3 trigger the two LED arrays alternately, so that their
frequencies are in each case half of the basic frequency. The duty cycles are chosen as 25%. This
is just one example; we can realise any conceivable triggering scheme as long as the frequencies
are consistent with the clock frequency (100 MHz).
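A sketch of this timing scheme (the camera frequency below is an assumed example; only the frequency ratios and duty cycles follow the description above):

```python
import numpy as np

def trigger_pattern(n_samples, clock_hz=100e6, cam_hz=1000.0, duty=0.25):
    """One possible triggering scheme (cf. Fig. 6.16, upper left): T1 triggers the
    camera at the basic frequency, T2 and T3 trigger the two LED arrays
    alternately at half that frequency with 25% duty cycle."""
    t = np.arange(n_samples) / clock_hz
    cam_phase = (t * cam_hz) % 1.0             # phase within one camera period
    led_phase = (t * cam_hz / 2.0) % 1.0       # phase within one LED period (half frequency)
    T1 = (cam_phase < duty).astype(int)                               # camera
    T2 = (led_phase < duty).astype(int)                               # LED array 1 (e.g. 455 nm)
    T3 = ((led_phase >= 0.5) & (led_phase < 0.5 + duty)).astype(int)  # LED array 2 (e.g. 470 nm)
    return T1, T2, T3
```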
Figure 6.16, right, shows a schematic circuit diagram, explaining the triggering of the LEDs.
The signal of the Data Translation Card is amplified by the op amp LT1211, using an external
power supply. Switching is performed by a MOSFET-transistor.
⁴ Data Translation Inc., Marlboro, Massachusetts; http://www.datex.com
⁵ Optimage Ltd., Edinburgh, Scotland, United Kingdom
⁶ 3M Speciality Materials, Zwijndrecht, Belgium; http://www.3m.com
⁷ PQ Potters Europe GmbH, Kirchheimbolanden, Germany; http://www.potterseurope.org
Figure 6.16: Left: Triggering of the illumination unit and of the camera. In the upper left corner, one
possibility of timing is displayed. Right: Schematic circuit diagram. The 5V-signal is amplified using a
LT1211 op amp, switching is performed by a MOSFET-transistor
⁸ Philips Lumileds Lighting Company, San Jose, California; http://www.lumileds.com
⁹ Roithner Lasertechnik GmbH, Vienna, Austria; http://www.roithner-laser.com
¹⁰ Photonetics GmbH, Kehl, Germany; http://www.photonetics.com
¹¹ Basler AG, Ahrensburg, Germany; http://www.baslerweb.com
Table 6.1: Different kinds of tracer particles we applied in our experiments. The normalized diameter q
was calculated using a wavelength of λ = 500 nm.

vendor     | material                             | ρ [g/cm³] | d [µm] | q   | refr. index | sphericity
Optimage⁵  | polystyrene                          | 1.020     | 30 ± 6 | 200 | 1.602       | highly aspherical
3M⁶        | hollow glass spheres                 | 0.60      | 30     | 200 | 1.471       | exactly spherical
Potters⁷   | silver-coated hollow ceramic spheres | 0.9/1.1   | 100    | 500 | opaque      | roughly spherical
Table 6.2: Two selected dyes: Tartracine (Acid Yellow 23) and New Coccine (Acid Red 18 or Victoria
Scarlet 3R).

name        | chemical formula  | mol. weight | appearance    | λmax   | sol. in water
Tartracine  | C16H9N4Na3O9S2    | 534.37      | orange powder | 425 nm | 300 mg/ml
New Coccine | C20H11N2Na3O10S3  | 604.48      | maroon powder | 506 nm | 80 mg/ml
Table 6.3: Different LEDs used for illumination.

name                                                  | material | peak wavelengths                                  | lum. flux           | energy flux
Luxeon III Emitter⁸ (lambertian, operated at 1000 mA) | InGaN    | royal blue (455 nm), blue (470 nm), cyan (520 nm) | 20 lm, 30 lm, 80 lm | 450 mW, 480 mW, 165 mW
Luxeon Star O⁸ (lambertian, with collimator, 350 mA)  | InGaN    | royal blue (455 nm), blue (470 nm)                | 10 lm, 16 lm        | 220 mW, 260 mW
C11A1⁹ (viewing angle 30°, 350 mA)                    | InGaN    | HB-30 (470 nm), HC-30 (505 nm)                    | 4.5 lm, 10 lm       | 70 mW, 35 mW
Table 6.4: The two cameras we used in our experiments.

                                      | Camrecord 2000¹⁰                         | Basler A641f¹¹
Sensor type                           | CCD frame transfer, backside illuminated | Sony ICX267 AL/AK, progressive scan CCD
Sensor optical size                   | 9.2 mm (100% fill factor)                | 1/2 inch
Pixel size                            | 18 × 18 µm                               | 4.4 × 4.4 µm
Maximum resolution                    | 512 × 512 pixel                          | 1624 × 1234 pixel
Maximum frame rate at full resolution | 1000 fps                                 | 14 fps
Video output type                     | proprietary / Fast Ethernet              | IEEE1394a
Video output format                   | 8 bits/pixel                             | 8 bits/pixel, 12 bits/pixel
Maximum quantum efficiency            | 70% @ 600 nm                             | 50% @ 500 nm
Dimensions                            | 90 × 120 × 140 mm                        | 75 × 44 × 29 mm
Weight                                | 1600 g                                   | 110 g
7 Image Processing
Every quantitative image-based measurement technique includes some kind of image processing, fulfilling the tasks of preprocessing, segmentation, feature extraction, motion analysis and
postprocessing.
In this chapter we present the image processing in a chronological order, i. e. starting with
preprocessing and ending with postprocessing. The input data are the image sequences obtained
from the hardware setup, which we already described in chapter 6. Basically these consist of
a number of recorded images containing both objects and background. Furthermore, the grey
values in the images do not necessarily represent the corresponding light intensities, and the lighting may be inhomogeneous. These topics, which include radiometric calibration, illumination correction and background subtraction, are addressed in the first section of this chapter. In section 7.2 we will extract the specific kinds of features that we will use in the further analysis, namely the centers of gravity and the representative grey value [1] of the distinct particles. Another feature is the grey value pattern itself. It has already been “extracted” in the preprocessing procedure.
These features work as an input for the estimation of the particles’ 3D velocity (section 7.3) and
3D position (section 7.4). Particle trajectories, which are generated using temporal information
(section 7.5), work as an input to the task of postprocessing, described in section 7.6, which
consists of elongation and smoothing of the trajectories and of the calculation of a dense vector
field.
7.1 Preprocessing
The major intention of preprocessing is to prepare the image sequences for the feature extraction step, i. e. to condition the images in such a way that the later analysis routines do not depend on where in the image they are applied, that only global thresholds have to be set, and that the number of thresholds can be reduced to a minimum. Acquiring real data is always accompanied by imperfections of the measurement setup, like non-linearities and fixed-pattern noise in the camera (section 7.1.1), inhomogeneities in the illumination (section 7.1.2) or objects in the background, which disturb the features of interest (section 7.1.3). We will have a closer look at the problems occurring in our experiments and show approaches to cope with them.
7.1.1 Simultaneous Radiometric Calibration and Illumination Correction
This section deals with the case of the Photonetics Camrecord 2000 high-speed camera; the Basler A641f camera sensor exhibits very good noise and linearity characteristics, so that we can apply the simpler scheme explained in section 7.1.2.
[1] The term “representative grey value” stands for the maximum grey value, the fitted grey value or the integrated grey value; see section 7.2.2.
[Figure 7.1 panels: maps of the slope and of the residual of the linear fits over the sensor, and grey value vs. light intensity samples at four positions (A: m = 22.8598, res = 7.9226; B: m = 21.4339, res = 5.8783; C: m = 24.2722, res = 6.9344; D: m = 20.6243, res = 3.4895).]
Figure 7.1: Results of the investigation of the Camrecord 2000 camera sensor. Ten images of the Lambertian calibration standard were taken, gradually expanding the duty cycles of the illuminating LEDs. Top, left: Map of the slopes of the fits to the grey values. Note that here the effects of variations in gain and illumination inhomogeneity are mixed. Top, right: Map of the residua of the fits. The irregularity among the sensor segments is demonstrated impressively; the borders of the segments exhibit completely different slopes and residua. Bottom: Samples of data points taken inside the sensor elements (A and B) and at the borders of the sensor elements (C and D).
Figure 7.2: Sensor image before (left) and after (right) calibration. The image quality has been improved considerably.
There are two peculiarities of the Photonetics frame transfer camera sensor which have to be corrected by means of a radiometric camera calibration:
Gain and noise The sensor is divided into 16 individually controlled segments. Because each segment is addressed by its own amplifier, each segment has its own gain (which can be seen in Fig. 7.1, top, left). The noise, called “electronic noise”, can be expressed as the variance of the grey values in the dark image σ0(i, j) and is composed of readout noise, amplifier noise, thermal noise, dark current and quantisation noise. Each segment also has its own noise characteristic, see Fig. 6.15, left.
Linearity Ideal linear camera sensors transform the incoming number of photons (called the “intensity”) linearly into a charge, and further linearly into a voltage (called the “grey value”). Real cameras, such as the Camrecord 2000, deviate from this behaviour, which becomes evident from looking at Fig. 7.1, top, right.
The response of the camera chip to light of varying intensity was investigated. In order to do this, a Lambertian calibration standard was imaged. Starting with a dark image, the duty cycle of the illuminating LEDs was gradually expanded and an image sequence was taken in each case. In order to suppress noise, the image sequences were averaged, yielding a floating point image for each duty cycle. In this way the dependency of the grey value on the intensity was obtained at each pixel, gI(i, j), where the intensities are not continuous but sampled at ten points only. By applying a linear ordinary least squares fit to the measured data, we obtain a measure for the gain (together with the effect of inhomogeneous illumination), which is the slope of the line of best fit. The goodness of fit, which can be expressed by the residuum, denotes the deviation from linearity. Figure 7.1 shows the result of this procedure.
The approach for a combined radiometric calibration and illumination correction is as follows:
By acquiring a sequence of reference images, we have the grey value–intensity dependency at
distinct intensities. The goal is now to obtain the true intensity, corresponding to every grey
value, which may differ from pixel to pixel. The algorithm consists of two steps: First, the gaps
between the sampled intensities are filled continuously by piecewise linear interpolation. Then the dependency is inverted, so that it yields the true intensity, given the grey value:

gI(i, j)  --(interpolation)-->  g(I; i, j)  --(inversion)-->  I(g; i, j)

Figure 7.3: Dependency grey value–intensity g(I) in one selected pixel. Left: Multi-point calibration. Right: Two-point calibration.
We end up with a data cube of dimensions 256 × nx × ny, where nx and ny are the numbers of pixels in both directions. This kind of “brute-force” calibration may have the disadvantage of high memory usage, especially for high-resolution sensors. We note that using spline interpolation we could possibly improve the efficiency of this approach.
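As an illustration, the per-pixel inversion can be sketched as follows (a minimal sketch in Python/NumPy, not the original implementation; ref is assumed to be the stack of averaged reference images and intensities the corresponding relative intensities):

```python
import numpy as np

def build_inverse_lut(ref, intensities, n_grey=256):
    """Build a per-pixel lookup table I(g; i, j) by inverting the sampled
    grey value-intensity dependency g_I(i, j) with piecewise linear interpolation.
    ref: array (n_samples, ny, nx) of averaged images at increasing intensity.
    intensities: array (n_samples,) of the corresponding (relative) intensities.
    Returns a data cube of shape (n_grey, ny, nx)."""
    n_samples, ny, nx = ref.shape
    grey_axis = np.arange(n_grey, dtype=np.float64)
    lut = np.empty((n_grey, ny, nx), dtype=np.float32)
    for j in range(ny):
        for i in range(nx):
            g = ref[:, j, i]                    # sampled grey values at this pixel
            order = np.argsort(g)               # np.interp needs increasing sample points
            lut[:, j, i] = np.interp(grey_axis, g[order], intensities[order])
    return lut

def correct_image(img, lut):
    """Replace every grey value by the corresponding true intensity."""
    idx = np.clip(img.astype(np.intp), 0, lut.shape[0] - 1)
    j, i = np.indices(img.shape)
    return lut[idx, j, i]
```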
7.1.2 Illumination Correction
As already mentioned, the Basler A641f camera, which is used in the convection tank experiments, contains a high-quality sensor (low temporal noise, no fixed pattern noise, good linearity). Thus the images taken by this camera need not be corrected radiometrically; only an illumination correction has to be performed. For this purpose, a two-point correction was chosen.
Inhomogeneous illumination affects the imaged objects multiplicatively. So we have to divide by a reference image if we want to correct the images with respect to illumination. There are two options to acquire a reference image:
1. A reference image can be recorded using a Lambertian calibration standard, for which it can be assured that light is reflected homogeneously and isotropically everywhere in the field of interest.
2. We assume that our recorded objects, the particles, have – statistically averaged – Lambertian reflection properties. Then an artificial reference image can be constructed by calculating an averaged image from the sequence. Note that this method only works if there is no background in the image.
Because we have a non-zero dark image, we have to subtract it from both the image to be corrected and the reference image. The corrected image g′(i, j) is obtained from the original image g(i, j) by

g′(i, j) = ⟨r(i, j)⟩ · (g(i, j) − d(i, j)) / (B(r(i, j)) − d(i, j)).

Here r(i, j) is the reference image and d(i, j) is the dark image. Because the reference image may have an inhomogeneous small-scale structure, it is smoothed with a Gaussian B of high order. To normalise the resulting image, it is multiplied with a constant, which is chosen to equal the spatially averaged reference image ⟨r(i, j)⟩.

Note that this correction is just a special case of the more general procedure described in the previous section: the function g(I, i, j) displayed in Fig. 7.3, left, is now reduced to a straight line, whose end points are the grey value in the dark image and the grey value in the reference image, see Fig. 7.3, right.

Figure 7.4: Background subtraction. Left: Sample image from the sequence. Center: Minimum image (i. e. background). Right: Subtracted image.
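A minimal sketch of this two-point correction (Python/NumPy and SciPy assumed; a Gaussian filter stands in for the high-order binomial mask B, and all names are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_correction(img, ref, dark, sigma=15.0):
    """Two-point correction g' = <r> * (g - d) / (B(r) - d)."""
    smoothed_ref = gaussian_filter(ref.astype(np.float64), sigma)   # B(r)
    denom = smoothed_ref - dark
    denom[denom <= 0] = np.finfo(np.float64).tiny                   # guard against division by zero
    corrected = (img - dark) / denom
    return corrected * ref.mean()                                   # normalise with the spatially averaged reference <r>
```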
7.1.3 Background Correction
The recorded image sequences always exhibit some kind of background, which interferes with the feature extraction. In contrast to inhomogeneous illumination, the background does not affect the objects of interest multiplicatively; it rather contributes additively to the overall intensity, so that we have to subtract the background.
There are two ways to obtain a background image:
1. We can record an image where no particles are added to the fluid, but with the same conditions as in the experiment (same concentration of the absorbing fluid, same position, same water level). Sometimes this is not possible.
2. Assuming that all particles of interest move by at least their own diameter during the duration of the sequence, the minimum image should equal the background image. The minimum image is obtained by taking, pixel by pixel, the lowest grey value of the sequence:
gmin (i, j) = min(g1 (i, j), g2 (i, j), . . . , gN (i, j)),
where gk(i, j) denotes the grey value at position (i, j) in frame k of an image sequence of length N.
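A minimal sketch of the minimum-image background subtraction (NumPy assumed):

```python
import numpy as np

def subtract_background(sequence):
    """sequence: array (N, ny, nx). The pixelwise minimum over the N frames
    serves as the background image and is subtracted from every frame."""
    g_min = sequence.min(axis=0)            # g_min(i, j)
    return sequence - g_min[np.newaxis]     # background-free sequence
```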
7.2 Segmentation
Segmentation in our context means the separation of the objects of interest, the tracer particles,
from the background and from each other. Segmentation can be regarded as one necessary step
towards the extraction of features like the center of gravity or the representative grey value of a particle. The input of the segmentation process is an eight-bit image which has undergone the preprocessing described in section 7.1. The output is a binary image, in which the object pixels are marked with the value one and the background pixels with the value zero. Starting from this binary image, the particles can be labeled and image features can be extracted.
According to [Jähne, 2002] there are four different approaches for segmentation:
Pixel-based methods This simplest segmentation method is based on grey value thresholding. Pixels with a grey value greater than a certain threshold are assigned to the particle class; otherwise they are assigned to the background class. We introduce a threshold operator T[t1,t2], which acts on the grey-valued image and yields a binary image:

T[t1,t2](g(i, j)) = 1 if t1 < g(i, j) < t2, and 0 otherwise.

The lower and upper thresholds [t1, t2] can be chosen arbitrarily by the user, or they can be set by the machine based on a priori knowledge, like the camera noise, or by considering the bimodality of the histogram. One problem of grey value thresholding is that, depending on the choice of the threshold, objects may appear larger or smaller.
Edge-based methods Edges can be detected by first- or second-order derivative operators like the Sobel or Laplace filter. Again, we have to apply a threshold to select the edges. Then we have to connect the edges of an object in such a way that they are unique and closed. One can imagine that such a contour-tracking algorithm may be very sophisticated, especially when objects overlap each other. Edges define the borders of an object uniquely, so that we do not suffer from ambiguities concerning the object size, as with grey value thresholding.
Region-based methods These methods use a priori knowledge of the object by demanding connectivity of an object. Here connectivity means that the decision whether a pixel belongs to the object class or to the background class depends on the classification of its neighbours. Section 7.2.2 describes the region-growing algorithm, which is a region-based method containing model-based elements in our implementation.
Model-based methods Here a priori information about the shape of an object is given. For example, we assume that particles have to look like Gaussians. Because no model fits reality perfectly, we have to decide on the basis of the residual, a number which describes the goodness of fit, whether the candidate belongs to the object or to the background class. We will investigate Gaussian fitting further in section 7.2.3.
Figure 7.5: Watershed transformation: Flooding process at different time instants. At time t = 1 the
lowest basins, marked with the level curve 1, are filled. At time t = 2 the water rises to level curve 2. To
prevent the lakes stemming from two basins from merging, a dam is built. At time t = 3 rising of the water
continues and more dams are built.
7.2.1 The Watershed Transformation
In both employed segmentation approaches, the model-based method and the region-based
method, we make use of the watershed transformation, a very valuable tool for segmentation,
which has its origins in morphological image processing. In our application, the watershed transformation is thus not a segmentation procedure on its own, but a constituent of both selected segmentation techniques, region growing (section 7.2.2) and the fit of Gaussians (section 7.2.3).
We consider an image g as a topographic surface and define the “catchment basins” and the
“watershed lines” by means of a flooding process. Imagine that each minimum of this surface is filled up with a constant vertical speed, so that “lakes” arise. During the process of flooding, two or more lakes coming from different minima may merge at a particular line. By building a “dam” on this line, we prevent the lakes from merging. These dams define the watersheds of the topographic surface, or, respectively, of the image.
Figure 7.5 shows the flooding process at four different time instants t = {0, 1, 2, 3}. The grey value image is displayed in level curve representation.
The constant vertical speed is chosen in such a way that at every time step the water level rises by the height interval between two adjacent level curves. At t = 1 the regional minima at the lowest level are filled. By raising the water level further, we have to build a dam between two of the minima. This dam is the curve which has exactly the same distance to the adjacent minima. With increasing water level, more and more dams are built.
We can formulate this process mathematically. Here Xt denotes the area which is already flooded at a time instant t:
1. X1 = T1(g)
2. Xt+1 = Rt+1(g) ∪ I_{Tt+1(g)}(Xt)   for 1 ≤ t ≤ N
Here Rt(g) are the regional minima of g at level t, and I_X(Y) are the influence zones of the various sets X within the set Y. Tt is a threshold operator which yields a binary image consisting of the pixels with grey value t.
Finally, the set of the watersheds is defined as the complementary set to XN , where N is the
top level of the topographic surface g. Before applying the watershed transformation to a particle
image, it has to be inverted. Results of the watershed transformation are displayed in Fig. 7.9,
center.
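For illustration, the flooding-based watershed of the inverted particle image can be computed with standard morphological tools, e.g. with scikit-image (this is only a sketch, not the implementation used in this work):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def watershed_basins(img):
    """Catchment basins of the inverted particle image g.
    Returns a label image; pixels on the watershed lines are labelled 0."""
    topo = img.max() - img                                # invert: bright particles become minima
    minima = (topo == ndi.minimum_filter(topo, size=3))   # regional minima serve as flooding sources
    markers, _ = ndi.label(minima)
    return watershed(topo, markers, watershed_line=True)
```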
Figure 7.6: Estimation of the width of a grey value peak ∆b and of the grey value contrast ∆g: ∆g is the difference between the maximum grey value (gmax) and the brightest of the adjacent minimal grey values of the particle (max(gmin,o, gmin,u)). ∆b is the width of the particle at the height ∆h, where ∆h is defined as the sum of max(gmin,o, gmin,u) and ∆g/2.
7.2.2 Region Growing
The watershed transformation exhibits the disadvantage, that images tend to be oversegmented,
resulting in many false positives. This occurs mainly because of the presence of noise in the
image, which introduces many additional local minima in the topographic surface g. Isotropic
presmoothing of the image may reduce the noise, but also it will blur the dams, especially between
adjacent particles.
Region growing, in the form presented here introduced by [Hering et al., 1995a], is a somehow
more heuristic approach to satisfy the characteristics of particle images:
1. We compute the regional maxima of the image using an 8-connected neighbourhood. I. e. all
pixels are selected, which appear maximal in a 3 × 3-shifted window. These pixels may serve
as candidates for the regions.
2. Using the minima in the neighbourhood of the candidate, a measure for the width of the grey
value peaks ∆b and for the contrast of the grey value ∆g can be estimated (see Fig. 7.6).
A candidate becomes a seed, if ∆b is in a certain range, and if ∆g exceeds a predefined
threshold, which may depend on the noise of the image.
3. The seeds serve as markers for the watershed transformation ([Beucher, 1991]): Only the
basins become particles, which contain a seed, otherwise they are discarded.
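A minimal sketch of this seed selection followed by a marker-controlled watershed (SciPy/scikit-image assumed; the width criterion ∆b is omitted for brevity, and the threshold delta_g_min is illustrative):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def region_growing(img, delta_g_min=10.0, size=3):
    """Seeds are regional maxima (8-neighbourhood) whose grey value contrast
    exceeds delta_g_min; the seeds act as markers of the watershed transform."""
    maxima = (img == ndi.maximum_filter(img, size=size)) & (img > 0)   # candidate pixels
    local_min = ndi.minimum_filter(img, size=2 * size + 1)             # adjacent minima
    seeds = maxima & ((img - local_min) > delta_g_min)                 # contrast criterion (delta g)
    markers, _ = ndi.label(seeds)
    # marker-controlled watershed: only seeded basins are grown, restricted to non-background pixels
    return watershed(img.max() - img, markers, mask=img > 0)
```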
Once we have segmented the particles from the background and from each other, it is easy to extract relevant parameters, like
• the representative grey value of a particle, such as
  – the maximum grey value gm,
  – the fitted grey value gf (see section 7.2.3), or
  – the integral grey value gs = Σ_{i∈P} gi, where P denotes the set of pixels of the segmented particle;
• the area of the particle, Σ_{i∈P} 1;
• the position of the particle, either as a result of the fit (µ̂x, µ̂y) or by calculating the particle’s centroid:

  (xc, yc)ᵀ = (1 / Σ_{i∈P} gi) · ( Σ_{i∈P} gi xi , Σ_{i∈P} gi yi )ᵀ ;

• the second order moments of the particle:

  µ20 = Σ_{i∈P} (xi − xc)² gi ,   µ02 = Σ_{i∈P} (yi − yc)² gi ,   µ11 = Σ_{i∈P} (xi − xc)(yi − yc) gi .

The second order moments correspond to the inertia tensor in mechanics and can be consulted in order to get a measure for the eccentricity of an object.

Figure 7.7: Results of the segmentation process: every circle indicates a segmented particle. Note that our algorithm is capable of separating clustered particles.
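A minimal sketch of computing these features for one segmented particle, given the coordinates and grey values of its pixel set P (NumPy assumed):

```python
import numpy as np

def particle_features(x, y, g):
    """x, y, g: 1D arrays over the pixels of one segmented particle P."""
    g_sum = g.sum()                                   # integral grey value g_s
    xc = (g * x).sum() / g_sum                        # grey-value weighted centroid
    yc = (g * y).sum() / g_sum
    mu20 = ((x - xc) ** 2 * g).sum()                  # second order moments
    mu02 = ((y - yc) ** 2 * g).sum()
    mu11 = ((x - xc) * (y - yc) * g).sum()
    return {"area": x.size, "g_max": g.max(), "g_sum": g_sum,
            "centroid": (xc, yc), "moments": (mu20, mu02, mu11)}
```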
7.2.3 Fit of Gaussians
Tracer particles always exhibit the same shape on the image: small, circular blobs with the maximum grey value in the middle and decreasing intensity to the border. We can benefit from this
characteristic by developing a physically motivated model, which has to be fitted to the imaged
particles. A similar fit function is applied by [Leue et al., 1996] to segment particle streaks.
According to Babinet’s principle, the diffraction of light at a circular disk exhibits the same pattern on a screen as the diffraction at a circular aperture, which is the Airy function (assuming Fraunhofer diffraction):

I(ρ, φ) = ( 2 J1(kaρ) / (kaρ) )² I0 ,

where k is the wave number of the light, a is the disk’s radius and J1 denotes the first-order Bessel function of the first kind (for a plot see Fig. 7.8, red curve). Because of its isotropy, the two-dimensional Airy function depends only on the radial coordinate ρ, and not on the azimuth φ: I(ρ, φ) ≡ I(ρ).
Figure 7.8: Comparison of the one-dimensional Airy-function (also known as slit-function), using a single
wavenumber (monochromatic), a spectrum of wavenumbers (polychromatic), and its Gaussian approximation (from [Leue et al., 1996]).
Because the diffraction pattern of polychromatic light is a superposition of Airy functions (Fig. 7.8, blue curve), the secondary maxima of the Airy function become irrelevant. A Taylor expansion of ln I(ρ) yields

ln I(ρ) = ln(I0) − (kaρ/2)² + O(ρ⁴),

which legitimates a Gaussian (Fig. 7.8, green curve) as a good approximation of the diffraction pattern of a tracer particle:

Iσ(ρ) = I0 exp( −(1/2)(ρ/σ)² ),

with σ = √2/(ka) being the standard deviation of the Gaussian.
Using Cartesian coordinates and after setting the particle’s origin to an arbitrary location (µx, µy), the Gaussian becomes the final fit function, which reads

g_{µx,µy,σ}(x, y) = c / (√(2π) σ) · exp( −((x − µx)² + (y − µy)²) / (2σ²) ).   (7.1)
The segmentation process consists of the following steps (see Fig. 7.9):
1. A binary threshold operator T[tmin,255] is applied to the grey value image. A low threshold tmin is chosen to catch as many particles as possible.
2. To identify and separate overlapping particles, the watershed operator is applied to the grey value image. The binary output of the watershed operator is constructed such that one corresponds to basins and zero corresponds to watersheds. It is multiplied with the result of the threshold operator. The result itself is multiplied with the original grey value image, which serves as input for the optimisation.
3. The data is fitted to the model function Eq. (7.1). We choose a support of 3 × 3, 5 × 5 or 7 × 7 pixel² about the brightest pixel of each presegmented particle. Using the coordinates of the brightest pixel and σ = 1 as starting parameters, the model function is optimised according to the data. As optimisation routine, we apply the nonlinear Levenberg-Marquardt fitting algorithm described in [Press et al., 1992]. c is set to the integral of the presegmented particle and is held constant during the optimisation:

(µ̂x, µ̂y, σ̂) = arg min_{µx,µy,σ} Σ_{(x,y)} ( g(x, y) − g_{µx,µy,σ}(x, y) )² .

Figure 7.9: Segmentation using the model-based Gaussian fit method. Left: Binary image after grey value thresholding. Center: We applied a watershed transformation to the original eight-bit image and multiplied the watershed image with the binary image. Objects divided by watersheds are indicated by the yellow arrows. Right: After applying Gaussian fits to the presegmented objects, we grouped the particle candidates into the particle class (green) or the non-particle class (red) by means of the residual of the fit.
Besides the optimised center of the particle (µ̂x, µ̂y) and a measure for the width of the particle σ, the optimisation routine yields the residual. We use this criterion to distinguish between the particle class (whose shape is assumed to be approximately Gaussian) and the non-particle class. We can also estimate a maximum grey value of the particle from the fit parameters by evaluating g(µ̂x, µ̂y, σ̂) ≡ gf. However, the maximum grey value estimated with this method seems to be unstable: we fitted a Gaussian to one and the same particle, varying the support from 3 × 3 up to 7 × 7 pixel². While the optimised center (µ̂x, µ̂y) does not change much, there are huge discrepancies in the evaluated maximum grey value. [Marxen et al., 2000] proposes that, for estimating the center, a support of 3 × 3 pixels is sufficient. For the estimation of a maximum grey value this is definitely not the case, as illustrated in Fig. 7.10.
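A minimal sketch of the Gaussian fit of Eq. (7.1) with a Levenberg-Marquardt optimiser (SciPy assumed; the handling of the support is simplified and all names are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_gaussian(patch, c):
    """Fit Eq. (7.1) to a small support centred on the brightest pixel.
    patch: grey values of the support, c: integral grey value (held constant).
    Returns (mu_x, mu_y, sigma) in patch coordinates and the residual norm."""
    ny, nx = patch.shape
    y, x = np.mgrid[0:ny, 0:nx]

    def model(p):
        mu_x, mu_y, sigma = p
        r2 = (x - mu_x) ** 2 + (y - mu_y) ** 2
        return c / (np.sqrt(2.0 * np.pi) * sigma) * np.exp(-r2 / (2.0 * sigma ** 2))

    def residuals(p):
        return (model(p) - patch).ravel()

    j0, i0 = np.unravel_index(np.argmax(patch), patch.shape)   # brightest pixel as start value
    fit = least_squares(residuals, x0=[i0, j0, 1.0], method="lm")
    return fit.x, np.linalg.norm(fit.fun)
```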
7.3 Velocity Estimation
In the presented technique, correlation of particle patterns (as in PIV) is not feasible for velocity estimation, because the image sequences typically consist of many layers of particles, each one moving with its own speed. Besides that, particles may move in the direction orthogonal to the image plane, commonly referred to as “out-of-plane motion”. We make use of an extended optical-flow based approach in order to obtain the motion of individual particles.
Figure 7.10: Influence of the support on the fit results. Left: Support is 3 × 3 pixel², maximum grey value is gf = 32.09. Center: Support is 5 × 5 pixel², gf = 60.35. Right: Support is 7 × 7 pixel², gf = 59.13.
In order to estimate a particle’s velocity, two cases are considered here. First we assume that the suspended particles move parallel to the wall, so that z does not change. The grey value then remains constant for all times, and we can apply the brightness change constraint equation (BCCE) to obtain the optical flow:

dg/dt = (∇g)ᵀ f + gt = 0 .

The optical flow represents the components of the particle’s velocity parallel to the wall: f = (u, v)ᵀ.
Secondly, if the particles do not move parallel to the wall, i. e. with z not constant, the grey values will change with time in a Lagrangian frame of reference (see section 2.1.1), according to

dg/dt = −g0 (1/z∗)(dz/dt) exp(−z/z∗) = −(1/z∗)(dz/dt) g = −(w/z∗) g .   (7.2)

We find that, identifying w/z∗ with the relaxation constant κ, the brightness change in this special case can be modeled by an exponential decay (see Eq. (4.28)). Note that we expressed the temporal change of the z-coordinate by the out-of-plane velocity component w. We reformulate Eq. (7.2) in such a way that it resembles a three-dimensional brightness change constraint equation:

u gx + v gy + w g/z∗ + gt = (∇̃g)ᵀ u + gt = 0,   (7.3)

where ∇̃ = (∂/∂x, ∂/∂y, 1/z∗)ᵀ is an augmented gradient and u = (u, v, w)ᵀ is the three-component flow vector to be estimated. gx means the partial derivative of the grey value g with respect to x.
This can be written as a product of the data matrix D and the parameter vector p, which expresses some kind of basic TLS equation (see Eq. (A.1)). The ith row of the data matrix D = (d1, d2, . . . , dn)ᵀ contains the components

di = (gx,i, gy,i, gi/z∗, gt,i).
We construct the structure tensor by applying the left-handed (see A.3) and right-handed (see A.4) weighting matrices W_L and W_R to D and computing the product with its transpose:

J = (W_L D W_R)ᵀ (W_L D W_R)
  = [ ⟨gx, gx⟩11     ⟨gx, gy⟩12     ⟨gx, g⟩13/z∗     ⟨gx, gt⟩14
      ⟨gx, gy⟩12     ⟨gy, gy⟩22     ⟨gy, g⟩23/z∗     ⟨gy, gt⟩24
      ⟨gx, g⟩13/z∗   ⟨gy, g⟩23/z∗   ⟨g, g⟩33/z∗²     ⟨g, gt⟩34/z∗
      ⟨gx, gt⟩14     ⟨gy, gt⟩24     ⟨g, gt⟩34/z∗     ⟨gt, gt⟩44 ] ,

using the abbreviation

⟨a, b⟩ij = ∫_{−∞}^{∞} w(x − x′, t − t′) a(x′, t′) b(x′, t′)/(σi σj) dx′ dt′ .   (7.4)

Note that here we included the multiplications with the entries of the right-handed matrix W_R = diag(1/σi) for the purpose of equilibration, as stated in A.4. The reason for this is that, in general, the noise of the derivatives of the grey values differs from the noise of the grey values themselves.
After constructing the structure tensor and performing an Eigenvalue analysis (according to section 4.2.3), the parameter vector is given by the Eigenvector r4 belonging to the smallest Eigenvalue λ4:

p = (u, v, w, 1)ᵀ ≡ (1/r44) (r41, r42, r43, r44)ᵀ .   (7.5)
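A minimal sketch of this estimation for one spatio-temporal neighbourhood, assuming precomputed derivative images gx, gy, gt and the grey values g within the window (NumPy assumed; the equilibration weights 1/σi are omitted for brevity):

```python
import numpy as np

def estimate_uvw(gx, gy, g, gt, z_star, window=None):
    """Solve u*gx + v*gy + w*g/z_star + gt = 0 in the TLS sense for one window.
    All inputs are arrays over the local spatio-temporal neighbourhood."""
    d = np.stack([gx.ravel(), gy.ravel(), (g / z_star).ravel(), gt.ravel()], axis=1)
    if window is not None:
        d = d * np.sqrt(window).ravel()[:, None]      # sqrt of w(x - x', t - t'), so J accumulates w*a*b
    J = d.T @ d                                       # 4x4 structure tensor
    evals, evecs = np.linalg.eigh(J)                  # eigenvalues in ascending order
    r4 = evecs[:, 0]                                  # eigenvector of the smallest eigenvalue
    u, v, w = r4[:3] / r4[3]                          # normalise so that the last entry equals 1
    total_coherence = (evals[-1] - evals[0]) / (evals[-1] + evals[0])
    return (u, v, w), total_coherence
```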
Besides the parameter vector containing the sought velocities, the presented technique yields confidence measures characterising the structure of the local spatio-temporal neighbourhood which was used as an input to the structure tensor. There are various types of confidence measures that can be applied; for a detailed overview see [Jähne, 2002; Spies, 2001]. For our purpose we use the following two measures:
Measure of certainty As a measure of the overall changes of the grey values we select the mean square of the spatial gradient, which is the upper left 2 × 2 part of the structure tensor derived in this section. According to [Jähne, 2002] this measure is invariant with respect to the determined velocity:

cc = ⟨gx, gx⟩11 + ⟨gy, gy⟩22 .

Total coherence measure An additional measure enquires whether our model fits the assumption that a local neighbourhood with constant velocity is present. Because for such a motion we require the smallest Eigenvalue to be approximately zero, a normalised total coherence measure can be defined by

ct = (λ1 − λ4)/(λ1 + λ4) .

Due to the structure of the particles (which are circular and only a few pixels wide), it is guaranteed that we do not suffer from the aperture problem mentioned in 4.1. Thus we do not need a coherence measure which quantifies spatially oriented motions (i. e. the presence of an aperture problem), and we can rely on Eq. (7.5) to be the correct solution for the flow. On the other hand, there may be a motion discontinuity in our neighbourhood (stemming from particle overlappings), which can be detected by applying the total coherence measure.
7.4 3D-Position Estimation
The depth of the particles can be reconstructed according to Eq. (5.4) for the method of one
wavelength
z = z∗ (ln g0 − ln g),
or according to Eq. (5.5) for the method of two wavelengths
z(g1, g2) = (z∗1 z∗2)/(z∗1 − z∗2) · ( ln(g1/g2) + ln(g02/g01) ) .
Here g, g1 and g2 are the representative grey values of the particle, which can be the maximum
grey value, the maximum of a Gaussian fit, or the integral grey value (see 7.2.2).
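A minimal sketch of these depth reconstructions (NumPy assumed; the calibration constants are taken as given):

```python
import numpy as np

def depth_one_wavelength(g, z_star, g0):
    """Eq. (5.4): depth from a single representative grey value."""
    return z_star * (np.log(g0) - np.log(g))

def depth_two_wavelengths(g1, g2, z_star1, z_star2, g01, g02):
    """Eq. (5.5): depth from the representative grey values at the two wavelengths."""
    z_red = z_star1 * z_star2 / (z_star1 - z_star2)
    return z_red * (np.log(g1 / g2) + np.log(g02 / g01))
```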
7.5 Correspondence Analysis and Tracking
From the image sequence we obtain a dense 2D3C velocity vector field (u(x, y), v(x, y), w(x, y)), but in reality the motion vector field is 3D3C (u(x, y, z), v(x, y, z), w(x, y, z)). Thus, we have to assign the motions to the particles in their various depths.
This procedure results in a list where each particle is identified by a number and attributed with its center of mass, representative grey value and velocity. In the method of one wavelength we can deduce the true depth from the representative grey value by means of calibration. This is not possible with the method of two wavelengths: here we have to find the correspondence of one and the same particle in two temporally neighbouring frames. Our approach for this non-trivial task is explained in the next section.
7.5.1 Finding Correspondences
The simplest approach for finding correspondences is the nearest neighbour method, which assigns a particle i in the first frame to the particle j in the second frame whose Euclidean distance

dij = √( (xi − xj)² + (yi − yj)² ),   i = 1 . . . N1, j = 1 . . . N2,

is minimised, where N1 and N2 are the numbers of particles in the first and in the second frame, respectively. In our simple implementation the complexity is of O(N1 N2) (i. e. roughly quadratic in the mean number of particles), because for every particle in the first frame the distance to every particle in the second frame has to be calculated in order to find the particle pairs with minimal d.
The nearest neighbour method works well if the displacements from one frame to the next are small compared to the mean particle distance. Nevertheless, because in general the particles are distributed statistically, there may be misassignments: it may happen that a particle in the second frame does not find any partner (1:0-correspondence), or that it finds more than one partner (1:n-correspondence). To solve this problem, we discard all 1:0- and 1:n-correspondences, so that only 1:1-correspondences survive.
Figure 7.11: Schematic illustration of the benefits of warping. Left: Unwarped image pair. The nearest
neighbour approach yields misalignments and 1:0- or 1:n-correspondences. Right: The second image
is warped towards the first image according to the plotted displacement vectors, which are obtained by
applying the optical-flow based method.
In order to do so, prior knowledge about the displacements from one frame to the next could be
used, given by the optical-flow based velocity analysis.
Therefore, the particle positions in the second frame (xj , yj ) are warped towards the positions
in the first frame (xi , yi ) by subtracting the displacements (uj (xj , yj ), vj (xj , yj )) from (xj , yj ):
(x′j , yj′ ) = (xj , yj ) − (uj , vj ).
Now the nearest neighbour method is applied, assigning a particle i in the first frame to the
particle j in the warped second frame by minimising the Euclidean distance
d′ij = √( (xi − x′j)² + (yi − y′j)² )
between these particles. Additionally we impose an upper threshold for d′ to inhibit unlikely
1:1-correspondences.
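A minimal sketch of this warped nearest-neighbour assignment (SciPy’s cKDTree is used here for the search; d_max is an illustrative threshold, and only 1:1 correspondences are returned):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_particles(p1, p2, uv2, d_max):
    """p1, p2: (N1, 2) and (N2, 2) particle positions in frames 1 and 2;
    uv2: (N2, 2) displacements at the particles of frame 2 (from optical flow).
    Returns the list of unique 1:1 pairs (i, j)."""
    warped = p2 - uv2                                   # warp frame 2 back towards frame 1
    dist, j = cKDTree(warped).query(p1, k=1)            # nearest warped particle for every i
    pairs = [(i, jj) for i, (jj, d) in enumerate(zip(j, dist)) if d < d_max]
    # discard 1:n correspondences: keep only particles j that are claimed exactly once
    claimed = np.asarray([jj for _, jj in pairs], dtype=int)
    counts = np.bincount(claimed, minlength=len(p2))
    return [(i, jj) for i, jj in pairs if counts[jj] == 1]
```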
7.5.2 Particle Tracking
Correspondence analysis can be used for two types of tasks:
Depth estimation As already mentioned in section 7.3, using the method of two wavelengths, particles in the first frame (which can be the image recorded with a wavelength of 455 nm, denoted as a-frame in the following) have to be assigned to the corresponding particles in the next frame (for instance recorded with 470 nm wavelength, called the b-frame). Because the velocity vector field is determined using the same type of frames (e. g. a-frames), we have to subtract only half of the displacements in order to warp back the particle positions (see Fig. 7.13, green).
Figure 7.12: Warping applied to real data. Top left: First frame, with centroids of the particles marked
blue. Top right: Second frame, with centroids of the particles marked green. Bottom: The centroids of the
first frame (blue) are warped towards the centroids of the second frame (green), yielding the red markers.
Particle tracking Particle tracking is applied for several reasons: First, by extracting a path
line of a moving fluid element (which is represented by a particle in good approximation), we
get a Lagrangian representation of the flow. Secondly, we can use the trajectories for postprocessing: For example, we can connect two adjacent trajectories, which may be interrupted by
missing frames, using the assumption of continuity of motion, or we can smooth the trajectories. Here a particle in the first frame (a-frame) is assigned to a particle in the next frame but
one (again a-frame). Now we have to subtract the full displacements for warping back (see
Fig. 7.13, blue).
We note that a similar approach is taken by [Cowen and Monismith, 1997], where information gained by a previous cross-correlation was used to improve the performance of particle tracking.
7.6 Postprocessing
Once the positions and velocities of the tracer particles have been estimated, we have acquired an irregularly spaced 3D3C motion vector field. For the further analysis we make use of the Lagrangian representation of the flow and of the continuity of the flow field.
Figure 7.13: Flow-diagram illustrating the extraction of velocity, depth and trajectories using optical flow
and correspondence analysis. In this example, the optical flow (orange) is calculated from the a-frames.
The depth (green) is computed from the representative grey values of corresponding particles in both a- and
b-frames using the optical flow. The trajectories (blue) are obtained by establishing the correspondences
between particles in the a-frames with the help of optical flow.
7.6.1 Elongation of Trajectories
In the analysis so far, we acquired information about the depth of a particle by converting grey value into position by means of Beer-Lambert’s law. In the introduction of chapter 5 we have already addressed the fact that this procedure yields only a depth map, i. e. that occluded objects do not become visible. In our image sequences, occlusions occur wherever particles overlap each other. With increasing particle density, overlappings are the rule rather than the exception.
Detection of overlappings We have investigated two methods for the detection of overlappings. The first one uses the coherence measure obtained from the motion analysis (see section 7.3): particle overlappings are closely related to discontinuities of motion occurring in the spatio-temporal window from which the structure tensor is calculated. The second one uses the fact that 1:1-correspondences are required for particle tracking. If two particles overlap in the second frame, one particle in the first frame cannot be assigned uniquely to one single particle in the second frame; therefore, the trajectory has reached its end.
Finding trajectory candidates Possible candidates for the elongation of a trajectory T0 with its end-particle in frame t are all trajectories Ti, i = 1 . . . N, with start-particles between t and t + ∆t, where ∆t is a value for the maximum allowed duration of a particle crossing.
Trajectory matching We apply N ordinary least squares fits to the trajectory pairs {T0, Ti}, regarding the variables x, y and z. The residuals of the fits measure how well the trajectories match in the individual variables. By combining these measures into a score, one can decide whether two trajectories belong to the same particle or not.
Filling of the gap By adopting the fitted values in the temporal gap between a trajectory pair, the two trajectories are merged into one new, longer trajectory.
Figure 7.14 shows the process of trajectory elongation. For the sake of simplicity, the y-coordinate has been dropped by using 1D sections of the frames. In the x−t image, the spatio-temporal locations of the superpositions become clearly visible. By multiplying the original images with a mask generated from the coherence measure, the positions where overlappings
Figure 7.14: Simple example, demonstrating our procedure to cope with overlappings and to elongate trajectories. Top, left: Some frames of the considered image sequence. A bright and fast particle “overtakes”
two slow and dark particles. A horizontal line through the center of the particles was picked as a basis for
a spatio-temporal x − t-image (top, center). Then a confidence measure from velocity analysis (here the
trace of the upper left 2 × 2-part of the structure tensor) was used to exclude non-constant motion (top,
right). Bottom: The sub-trajectories were matched both in x − t- and in z − t-space. From the residuals
of the fits, a score was calculated.
occur can be excluded from the further analysis. Figure 7.14, bottom, shows the fits to the particles’ x- and z-coordinates. By combining multiple features into a score, a greater stability of the fit can be reached.
The major disadvantage of this method is that it imposes some thresholds which have to be set based on empirical knowledge: we have to define a maximum ∆t, we have to specify how the different features are weighted against each other, and we have to set a limit for the score.
7.6.2 Smoothing of Trajectories
The accuracy of the determination of the z-coordinate suffers from the fact that there is an error in the determined grey values due to the limited pixel size and the camera noise. This will be investigated in more detail in section 8.2.4. Figure 7.15, left, shows a typical example of how
7.6 Postprocessing
1
reconstructed depth
200
gray value
150
100
50
40
0.6
0.4
0.2
cyan
blue
0
20
0.8
60
frame number
80
100
raw
smoothed
0
20
40
60
frame number
80
100
Figure 7.15: Smoothing of a trajectory. Left: Grey values of one and the same trajectory recorded with
cyan and blue light. Right: Directly calculated z-coordinate of this trajectory and smoothed version
the grey values along a trajectory might look. Smoothing is able to eliminate this kind of “noise” to a great extent. Therefore we applied a 1D binomial filter (e. g. B⁸) to the grey value time series of both wavelengths. The borders of both time series can be zero-padded; this cancels out by division. An exemplary result of smoothing is shown in Fig. 7.15, right.
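A minimal sketch of the binomial smoothing of a grey value time series (NumPy/SciPy assumed; B⁸ corresponds to the normalised binomial coefficients of order 8):

```python
import numpy as np
from scipy.special import binom

def binomial_smooth(series, order=8):
    """Smooth a 1D grey value time series with a normalised binomial mask B^order.
    The borders are implicitly zero-padded, as in the text."""
    mask = binom(order, np.arange(order + 1))
    mask /= mask.sum()
    return np.convolve(series, mask, mode="same")
```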
7.6.3 Calculation of a Dense Vector Field
For some hydrodynamical quantities (like vorticity, shear rate or dissipation rate), we need an equally spaced vector field. Moreover, this is useful when we want to compare our algorithms to other methods like classical PIV.
One has to interpolate in some manner to obtain a displacement vector at every point of an equally spaced grid. An overview of the classical methods is given in [Lancaster and Salkauskas, 1986]. A common technique applied in fluid flow analysis is the adaptive Gaussian windowing (AGW) method presented in [Agüi and Jimenez, 1987], which is a special case of the more general normalised convolution method by [Knutsson and Westin, 1993]. The interpolated vector field vi, i = {x, y, z}, at the equally spaced positions xi can be calculated from the irregularly sampled velocities uj at positions xj by applying the following procedure:
vi(xi) = Σ_{j=1}^{N} uj(xj) c(xj) exp(−|xi − xj|²/σ²)  /  Σ_{j=1}^{N} c(xj) exp(−|xi − xj|²/σ²) ,
where c(xj) denotes the confidence of the measurement at the position xj, which is given by one of the confidence measures obtained by motion analysis (see section 7.3). Using Monte Carlo simulations, [Agüi and Jimenez, 1987] have shown that the optimal width of the convolution window, which is given by σ, is directly proportional to the mean spacing between particles, δ2 resp. δ3:

σ = α δ2 = α √( A/(πN) )   for the 2D case, and
σ = α δ3 = α ( V/(πN) )^(1/3)   for the 3D case,

with the proportionality constant α set to 1.24. Here N is the number of particles contained in the area A or the volume V. The results of [Agüi and Jimenez, 1987] for the 2D case were confirmed by [Hering, 1996] for the 3D case.
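A minimal sketch of the adaptive Gaussian windowing interpolation (NumPy assumed; a squared Euclidean distance is used in the exponent of the Gaussian window):

```python
import numpy as np

def agw_interpolate(grid_points, sample_points, velocities, confidence, sigma):
    """grid_points: (M, 3) regular positions x_i; sample_points: (N, 3) positions x_j;
    velocities: (N, 3) irregular samples u_j; confidence: (N,) weights c(x_j)."""
    out = np.empty((len(grid_points), velocities.shape[1]))
    for m, xi in enumerate(grid_points):
        d2 = np.sum((sample_points - xi) ** 2, axis=1)      # |x_i - x_j|^2
        w = confidence * np.exp(-d2 / sigma ** 2)           # confidence-weighted Gaussian window
        out[m] = (w[:, None] * velocities).sum(axis=0) / w.sum()
    return out

def agw_sigma_3d(volume, n_particles, alpha=1.24):
    """Optimal window width sigma = alpha * (V / (pi N))**(1/3) for the 3D case."""
    return alpha * (volume / (np.pi * n_particles)) ** (1.0 / 3.0)
```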
7.7 Estimation of the Wall Shear Rate using 3D Parametric Motion Models
In this section we generalise the well-known concept of estimating the parameters of an affine transformation of the optical flow (see chapter 4.3.1) to the 3D case. The knowledge of some of the components of the 3D parametric motion model allows us to estimate the wall shear rate directly, without previously calculating the vector fields.
7.7.1 Parameterization of 3D Physical Flow Fields
In the following we consider 3D physical flow fields. We apply the notation u ≡ (u1, u2, u3)ᵀ ≡ (u, v, w)ᵀ for the 3D velocity vector at the 3D position x ≡ (x1, x2, x3)ᵀ ≡ (x, y, z)ᵀ. A flow field u(x) can be expanded in a first-order Taylor series in the vicinity of (x0, t0):

ui(xj, t) ≈ ui(xj,0, t0) + (∂ui/∂xj) xj + (∂ui/∂t) t .

We made use of Einstein’s summation convention, and i and j run from 1 to 3. In vector-matrix notation this reads

u(x, t) ≈ s + Γx + bt,

where s is a 3D translation, Γ = (γij) = (∂ui/∂xj) is the 3 × 3 velocity gradient tensor, which is essentially the Jacobian, and b is a 3D acceleration.
This 3D parametrization can be incorporated into the 3D-BCCE Eq. (7.3):
(∇̃g)ᵀ · (s + Γx + bt) + gt = 0 .   (7.6)
From the components of the matrix Γ important physical quantities of the local neighbourhood
in the flow field can be computed, like
• the vorticity vector: ωk = εijk γij ,
• the strain rate tensor: sij = 1/2 (γij + γji) ,
• the dissipation rate: ε = −2ν sij sij = −ν (γij + γji) γij .
For the interpretation of the velocity gradient tensor, which describes the kinematics of fluid
motion up to first order, we also refer to the section concerning the Helmholtz theorem (section 2.1.2).
If we assume that we have pure 2D-flow u = u(x, y), v = v(x, y) and w = 0, Eq. (7.6)
reduces to the optical flow-parametrization case (4.20).
Like in the case of 2D optical flow fields one can generalise the parametrization of 3D physical
flow fields in the context of the Lie group of continuous transformations (see section B.2).
Figure 7.16: Example images of the synthetic image sequences (left), maps of the wall shear rates estimated with our algorithm (center) and velocity profiles, which deliver the ground truth for the wall shear
rates (right). The distinct image sequences are described in the text.
7.7.2 Estimation of the Wall Shear Rate
In the previous section the parameter estimation for 2D image sequences subject to a 3D affine transformation was discussed. Now we return to our applications: besides knowing the velocity fields explicitly, it may be useful to know derived quantities like vorticity, strain rate or dissipation rate. In particular, the wall shear rate is vitally important for understanding the flow behaviour – e. g. for locating critical points of the flow.
In the following, the case of a uniform wall-parallel shear flow is addressed, i. e. u = u(z), v = v(z) and w = 0. The only non-vanishing components of the velocity gradient tensor γij = ∂ui/∂xj are

∂u1/∂x3 = ∂u/∂z = γ13   and   ∂u2/∂x3 = ∂v/∂z = γ23 .
Therefore the 3D-EBCCE (4.20) can be rewritten as

(∇̃g)ᵀ · ( 0 + [ 0 0 γ13 ; 0 0 γ23 ; 0 0 0 ] · (x, y, z)ᵀ + 0 ) + gt = 0,   (7.7)

which can be transformed into a scalar product of the data vector d and the parameter vector p after some simple algebraic manipulations:

d · pᵀ = (gx z, gy z, gt) · (γ13, γ23, 1)ᵀ = 0 .
Starting from this scalar product, one can construct an expanded structure tensor, similar to the way presented in section 7.3:

J = (W_L D W_R)ᵀ (W_L D W_R)
  = [ ⟨gx z, gx z⟩11   ⟨gx z, gy z⟩12   ⟨gx z, gt⟩13
      ⟨gx z, gy z⟩12   ⟨gy z, gy z⟩22   ⟨gy z, gt⟩23
      ⟨gx z, gt⟩13     ⟨gy z, gt⟩23     ⟨gt, gt⟩33 ] ,

using the abbreviation Eq. (7.4). Again, equilibration is used here in order to weight the entries of the matrix D relative to their noise.
By performing an Eigenvalue decomposition we obtain an estimate of the parameter vector,

p ≡ (γ13, γ23) = (1/r33) (r31, r32) ,

where r3 is the Eigenvector belonging to the smallest Eigenvalue λ3.
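A minimal sketch of this direct estimation for one neighbourhood, assuming precomputed gx, gy, gt and a depth map z over the window (NumPy assumed; equilibration omitted):

```python
import numpy as np

def estimate_wall_shear_rate(gx, gy, gt, z, window=None):
    """Solve gamma13*gx*z + gamma23*gy*z + gt = 0 in the TLS sense for one window."""
    d = np.stack([(gx * z).ravel(), (gy * z).ravel(), gt.ravel()], axis=1)
    if window is not None:
        d = d * np.sqrt(window).ravel()[:, None]      # sqrt of the window function w
    J = d.T @ d                                       # 3x3 expanded structure tensor
    _, evecs = np.linalg.eigh(J)
    r3 = evecs[:, 0]                                  # eigenvector of the smallest eigenvalue
    gamma13, gamma23 = r3[0] / r3[2], r3[1] / r3[2]
    return gamma13, gamma23
```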
7.7.3 Application to Synthetic Data
To illustrate the procedure of estimating the wall shear rate directly and to highlight some of its limits, we applied Eq. (7.7) to synthetically generated data, where the ground truth is well known.
The following image sequences are generated by prescribing a uniform, wall-parallel 3D flow: u(z) = γ13 z, v = w = 0. The flow is texturised using Gaussian intensity distributions of equal maximum intensity and equal variance, representing the particles. The z-position of the particles is indicated by attenuating the maximum intensity of the Gaussians according to Beer-Lambert’s law, Eq. (5.4).
The first synthetic image sequence contains particles which are distributed in such a way that they will never overlap each other: the particles are arranged in rows, each particle in one row having the same depth and therefore the same brightness and the same speed (Fig. 7.16, top, left). Here the wall shear rate γ13 is exactly 0.6, which is indicated by the profile u(z) (Fig. 7.16, top, right), and the wall shear rate γ23 vanishes. The wall shear rates delivered by our algorithm are displayed in Fig. 7.16, top, center. The ground truth is reproduced very well. Slight deviations occur where several adjacent particle rows move with approximately the same speed. The reason for these deviations is the fact that the spatio-temporal neighbourhood is of limited size (in this case 65 × 65 pixels).
In the second synthetic image sequence the particles are randomly distributed (Fig. 7.16, bottom, left), moving such that they follow the wall shear rates γ13 = 1 and γ23 = 0. The estimated wall shear rates are mapped in Fig. 7.16, bottom, middle. Because overlappings may occur, there are regions where our algorithm produces significant deviations from the ground truth. This is evidence that our model fails in the presence of motion discontinuities, which are caused by overlappings of particles.
7.8 Summary of the Algorithms
In this last section of the chapter we will see how the distinct techniques can be connected in order to obtain the desired quantities. Here we confine ourselves to the applications presented in chapter 8, namely the estimation of velocity vector fields in Eulerian and Lagrangian representation using the method of two wavelengths, and the computation of the wall shear rate using the method of one wavelength. However, due to its modular setup, the algorithmics can be adapted to one’s individual needs.

Figure 7.17: Flowchart of the algorithm to calculate the Eulerian and Lagrangian velocity fields, using raw data generated with the method of two wavelengths. The analysis in section 8.2 and section 8.3 is performed in this way.
Estimation of the Velocity Vector Fields
Figure 7.17 gives an overview of the different components of our algorithm and how they are related to each other. The components are grouped in four modules, whose names refer to the headlines of the first six sections of this chapter. In the preprocessing step the raw data is separated into two subsequences (one for the first wavelength, e. g. 455 nm, and one for the second wavelength, e. g. 470 nm). To each of these subsequences the various procedures of preprocessing and segmentation (section 7.1 and section 7.2) are applied. By performing correspondence analysis (section 7.5.1) two grey values are assigned to one and the same physical particle; thus its depth can be calculated (section 7.4). Using one subsequence (e. g. of 470 nm wavelength), the optical
flow can be estimated according to section 7.3. We have arrived at an unequally spaced 3D3C
Eulerian velocity vector field, which can be interpolated to a regularly spaced one, using the technique described in section 7.6.3. On the other hand, the particles can be connected to trajectories
(section 7.5.2), so that postprocessing (section 7.6.1 and section 7.6.2) can be applied.
For the quantification of the particles’ absolute depths and absolute velocities, a calibration has to be performed, i. e. both the penetration depths (z∗1, z∗2) and the ratio of the intensities g01/g02 have to be extracted using a special calibration technique. In section 8.1 the calibration procedure and its validation are explained.
Estimation of the Wall Shear Rate
Figure 7.18 visualises the various steps which have to be carried out until the wall shear rate is obtained. In this thesis, the wall shear rate was calculated for image sequences recorded with the method of one wavelength. Thus, after preprocessing and segmentation, correspondence analysis is not necessary. Using the preprocessed sequence and a sequence of depth maps z(x, y) as input, the wall shear rate distribution can be computed directly, without previous calculation of the velocity vector fields (section 7.7.2).

Figure 7.18: Flowchart of the algorithm to calculate the wall shear rate distribution using the method of one wavelength. The analysis in section 8.4 is performed in this way.
8 Experimental Results
In the following, some experiments applying our measurement technique and our algorithmics to real flows and real data will be presented. In section 8.1 the applicability of Beer-Lambert’s law to depth estimation is demonstrated. Moreover, we will see how our system can easily be calibrated with regard to the reconstruction of the particles’ depth. The main contribution of this work is the “proof of concept” of the method of two wavelengths applied to real-world flows. A laminar falling film (section 8.2) provides a well-known velocity profile, so that the measurement method can be tested. Convective turbulence (section 8.3) represents a more complex flow. Finally, in section 8.4 the algorithmics is adapted to data recorded using the method of one wavelength. Here the focus is on the direct estimation of the wall shear rate.
8.1 Calibration of Depth-Resolution
In this section, the procedure and the results of the calibration of the depth resolution for the method of two wavelengths will be discussed. A possible calibration procedure for the method of one wavelength is given in [Debaene, 2005].
In section 5.2 we found an expression for retrieving the depth z of a particle, given its apparent intensities g1 and g2 recorded with the two distinct wavelengths:
z(g1, g2) = (z∗1 z∗2)/(z∗1 − z∗2) · ( ln(g1/g2) + ln(g02/g01) ) .
By introducing the abbreviations η = ln(g1/g2), zred = (z∗1 z∗2)/(z∗1 − z∗2) and V = ln(g02/g01), the former can be rewritten as

z(η) = zred η + zred V .   (8.1)
8.1.1 Calibration via Linear Positioner
Equation (8.1) proposes a linear z−η dependency. We can check this by acquiring ηi at various depths zi and subsequently fitting a straight line to the data. The slope turns out to be zred and the intercept yields zred V.
The particles were fixed in a 2 mm thick layer of agarose gel. The preparation of the agarose gel is described in [Schlosser, 2004]. The gel with the particles was immersed into the dyed fluid using a linear positioner (see Fig. 8.1, left). By moving the table, the relative change of the thickness of the covering layer can be controlled. Using thin rods, water displacement can be neglected, so that the movement of the table equals the relative change in depth.
This procedure is carried out automatically by centrally controlling the motion of the table and
the image acquisition. Images were taken at 80 measurement points, separated by 125 µm, covering a total distance of 10 mm. Preprocessing (section 7.1) and segmentation (section 7.2) were carried out, and the correspondences between the particles acquired at 455 nm and those acquired at 470 nm were established (section 7.5.2). Using the maximum grey values of the N distinct particles at the two wavelengths, the mean of η, ⟨η⟩ = Σ_{i=1}^{N} ηi / N, is calculated as a function of z. By applying a linear ordinary least squares fit to the data, both zred and V can be extracted.

Figure 8.1: Calibration via linear positioner. Left: Sketch of the experimental setup (linear positioner, range 0–10 mm). Right: Reprojection of the data using the fitted values zred = −1.548 and V = 0.609. The estimated depth is plotted against the true depth.
Figure 8.1, right, shows the reprojection of the data, zreproj(z), using the fit parameters according to Eq. (8.1). We see a large scatter in the individual data points, which are marked blue, but the mean values fit very well to the line zreproj = z. Note that with increasing depth the reprojected data becomes less exact.
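A minimal sketch of extracting zred and V from measured (zi, ⟨η⟩i) pairs by an ordinary least squares fit (NumPy assumed):

```python
import numpy as np

def fit_calibration(z, eta_mean):
    """Fit z = zred * eta + zred * V; returns (zred, V)."""
    zred, offset = np.polyfit(eta_mean, z, deg=1)     # slope and intercept
    return zred, offset / zred

def reproject(eta, zred, V):
    """Reprojected depth z(eta) = zred * (eta + V), Eq. (8.1)."""
    return zred * (eta + V)
```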
8.1.2 Calibration via a Target
From the previous section we see that the linear dependency proposed in Eq. (8.1) is in good agreement with the experiment. Furthermore, we have found a method to obtain the calibration parameters. This calibration technique has the disadvantage, however, that it is quite elaborate. Calibration has to be carried out at least once whenever a new preparation of dyed fluid is used. Therefore it would be reasonable to calibrate directly at the measurement site, using the same camera settings as in the experiments.
Once linearity is guaranteed, it should be sufficient to take images at two measurement points which are a defined distance away from each other. Because data from many particles are acquired simultaneously, the statistical measurement error can be reduced. The particles are trapped in a thin cuvette filled with water. Another cuvette of width d, filled with water (or the dyed fluid), is put between the particles and the camera optics, working as an absorbing medium. Figure 8.2, left, shows a sketch of the two-step calibration principle.
1. The first image is taken while the cuvette is filled with water: the absorbing distance is 0 mm, so that Eq. (8.1) becomes
0 = zred (η + V ).
Figure 8.2: Calibration via target. Left: Sketch of the calibration target (filled with (un)dyed fluid). Right:
Calibration using light of 455 nm and 470 nm wavelength.
In order to fulfill the equation, the second factor has to be zero, so that a necessary condition for V is:
V = −[η]_{z=0} = [ln g_2 − ln g_1]_{z=0}.
2. The second image is taken while the cuvette is filled with a sample of the dyed fluid used in the experiment: the absorbing distance is d, so that Eq. (8.1) becomes
d = z_red (η + V),
which can be transformed to
z_red = d/(V + η) = [d/(V + ln g_1 − ln g_2)]_{z=d}.
(A minimal numerical sketch of this two-step procedure is given below.)
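The following sketch evaluates the two formulas above per particle and takes the median, as described in the next paragraph; the array names (g1_0, g2_0, g1_d, g2_d) are hypothetical and stand for the per-particle maximum gray values at the two wavelengths.

```python
import numpy as np

def two_point_calibration(g1_0, g2_0, g1_d, g2_d, d):
    """Two-step target calibration (sketch).

    g1_0, g2_0 : per-particle maximum gray values at the two wavelengths,
                 recorded with water in the cuvette (absorbing distance 0)
    g1_d, g2_d : the same quantities with dyed fluid in the cuvette (distance d)
    d          : cuvette width
    Returns the median estimates of V and z_red."""
    # Step 1: with zero absorbing distance, V = [ln g2 - ln g1]_{z=0}
    V = np.log(g2_0) - np.log(g1_0)
    # Step 2: with absorbing distance d, z_red = d / (V + ln g1 - ln g2)_{z=d}
    z_red = d / (V + np.log(g1_d) - np.log(g2_d))
    # The median suppresses outliers (e.g. particles floating in the absorber fluid)
    return np.median(V), np.median(z_red)
```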
Figure 8.3 shows the results of this technique. The calibration parameters z_red,i and V_i for each particle scatter statistically about their mean values. Because there may be outliers (caused, for instance, by particles floating in the absorber fluid), the median (instead of the mean) of the data is calculated. Reprojection reveals these outliers.
8.1.3 Discussion: Calibration of Depth-Resolution
Using a linear positioner, on the one hand the applicability of Beer-Lambert's law to the present setup can be tested; on the other hand, we have an instrument to reconstruct the calibration parameters occurring in the linear expression for retrieving the depth of a particle, Eq. (8.1). In analogy to radiometric camera calibration, once a linear dependency has been established, a two-point calibration should be sufficient. By employing a specially designed target, the calibration procedure has become much simpler, so that the parameters can be extracted directly using the same settings as in the experiment. By retrieving the calibration parameters for O(100) individual particles simultaneously and taking the median as the best estimate, we can be confident that the error in the parameters is small.
Figure 8.3: Example of a calibration performed using a target (left: without absorber, d = 0 mm; right: with dye as absorber, d = 1 mm). Top: Acquired images using light of the two wavelengths (455 nm and 470 nm). Center: Back-projected particle depths [mm]; except for some outliers, the depths scatter about the desired values. Bottom: Histograms of the computed calibration parameters, with median(V) = 0.03871 and median(z_red) = 1.1498.
8.2 Measurements in a Falling Film
One of the most basic laminar flows achievable in the laboratory is the flow in a falling film on an inclined plane. Flow parameters such as film depth, maximum velocity and Reynolds number can easily be varied by changing throughput, inclination angle and viscosity.
Its simplicity and versatility qualify this flow as a physical reference against which our technique can be tested. In this section the hydrodynamics of the flow, its realisation and the experiments are addressed.
8.2.1 Laminar Falling Film - Theory
The velocity field results from a solution of the Navier-Stokes equations (NSE) for the given geometry, assuming small Reynolds numbers.
The equations of motion for a flow of a uniform-density fluid read (see Eq. (2.6))
du/dt = g − ∇p/ρ + ν ∇²u.
We consider the fully developed stage of a two-dimensional (w = 0 everywhere), steady
(∂u/∂t = 0) flow running down an inclined plane (the angle to the horizontal is α), which is
solely driven by gravity g. Because continuity holds, i. e. ∂u/∂x + ∂v/∂y = 0, and the flow
characteristics are invariant in x-direction (i. e. ∂u/∂x = 0), we have ∂v/∂y = 0 everywhere.
Since v = 0 at y = 0, it follows that v = 0 everywhere, which reflects the fact that the flow is
parallel to the walls. Therefore the NSE reduce to one equation
0 = −g sin α + ν ∂²u/∂y²,
which can be integrated to
∂u/∂y = (g sin α/ν) y + A   and   u = (g sin α)/(2ν) y² + A y + B.
Because of the boundary conditions [u]_{y=0} = 0 (no-slip condition) and [∂u/∂y]_{y=b} = 0 (no shear stress at the surface), the integration constants are
A = −(g b sin α)/ν   and   B = 0,
so that the solution becomes
u(y) = (g sin α)/(2ν) (y² − 2by).
Using a coordinate system which has its origin at the film surface, with the ordinate pointing towards the wall, we replace y by ỹ = b − y and u by −ũ, so that the velocity profile can be written as
ũ(ỹ) = (g sin α)/(2ν) (b² − ỹ²),    (8.2)
which is commonly referred to as the Nusselt solution. From now on, we omit the tilde on ũ and ỹ.
The velocity averaged over the depth of the film is
u_m = (1/b) ∫_0^b u(y) dy = (g sin α b²)/(3ν) = (2/3) u(0),
so that the film depth follows as a function of the exterior parameters, which are the throughput Q, the width of the film d, the inclination angle α and the viscosity of the fluid ν:
Q = b d u_m   ⇒   b = Q/(d u_m) = (3Qν/(d g sin α))^{1/3}.
Figure 8.4: Diagrams showing the film depth b, the maximum velocity u_max and the Reynolds number Re as functions of the throughput Q (flow rate in cm³/s) and the viscosity ν, for water-glycerol mixtures ranging from ν = 1.00 mm²/s (0% glycerol) to ν = 1410 mm²/s (100% glycerol). An inclination of α = 1° and a film width of d = 20 cm are assumed.
A Reynolds number can be defined as
Re = (b u_m)/ν = Q/(ν d),
which does not depend on the inclination α.
Figure 8.4 shows the interior parameters film depth b, maximum velocity u(z = 0) and Reynolds number as functions of the exterior parameters throughput Q and viscosity ν, with the inclination angle α and the film width d held constant. A laminar flow is assumed here.
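As an illustration of these relations, the following sketch evaluates the Nusselt expressions derived above for the film depth, the surface velocity and the Reynolds number. It is a simple worked example in SI units, not part of the thesis software; the example values merely mimic the experimental conditions quoted later in this section.

```python
import numpy as np

def falling_film_parameters(Q, d, alpha_deg, nu, g=9.81):
    """Interior parameters of a laminar falling film from the Nusselt relations
    derived above (SI units): film depth b, surface velocity u(0) and Reynolds
    number Re, given throughput Q, film width d, inclination alpha and viscosity nu."""
    s = np.sin(np.radians(alpha_deg))
    b = (3.0 * Q * nu / (d * g * s)) ** (1.0 / 3.0)   # film depth
    u_max = g * s * b**2 / (2.0 * nu)                 # u(0) = (3/2) u_m, cf. Eq. (8.2)
    Re = Q / (nu * d)                                 # independent of the inclination
    return b, u_max, Re

# Illustrative call mimicking later experimental conditions
# (Q = 30 l/h, d = 20 cm, alpha = 1 deg, nu = 4.845 mm^2/s):
print(falling_film_parameters(Q=30e-3 / 3600, d=0.20, alpha_deg=1.0, nu=4.845e-6))
```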
According to [Braun, 1969] the flow in a falling film is laminar as long as Re < 3. In the range 3 < Re < 45, small disturbances due to the imperfect geometry result in pressure fluctuations. An additional pressure gradient then appears in the NSE, which makes their solution very complicated. [Chang and Demekhin, 2002] show that, depending on the type of perturbation and on the Reynolds number, various kinds of waves develop, ranging from periodic waves to solitary waves.
Above Reynolds numbers of about 45 the inertial forces play a significant role: the wavy flow state becomes unstable, and the flow transitions to turbulence.
Figure 8.5: Sketch (left) and photograph (right) of the falling film tank.
8.2.2 Laminar Falling Film - Setup
A falling film tank of 230 cm length and 20 cm width was constructed (see Fig. 8.5, left). Threaded rods are used to adjust the slope of the tank α continuously from 0 to 10°.
The tank was made entirely of borosilicate glass¹, which guarantees high resistance against acids, alkalis and organic substances. Moreover, by choosing PP-H² (homopolymer polypropylene) as piping material, safe usability of the tank with the above-mentioned substances was assured.
The flow is driven by gravity, pouring from one container, located about 2 m above the ground, into the other, residing on the ground. No active elements such as pumps were used, in order to avoid contamination of the fluid. Owing to a sophisticated design of the piping with two ball valves, the roles of the two containers can be alternated without altering the positions of the in- and outlet of the tank (see Fig. 8.6). The throughput Q of the flow is measured with a variable area flow meter located upstream of the inlet.
Image acquisition was done using the Camrecord 2000 high-speed camera at a resolution of 512 × 512 pixels² and a frame rate of 125 Hz or 250 Hz. The light sources 1 (royal blue – blue) and 3 (blue – cyan) were installed (see Fig. 6.11). In both cases, the absorption spectrum of tartrazine acid yellow turned out to be well matched to the emission spectra of the LEDs. The setup of the recording unit was as shown in Fig. 6.12. Figure 8.5, right, shows the recording unit mounted on a linear positioner table, which was used to perform measurements with the camera moving relative to the tank.
Both the Optimage particles and the 3M glass hollow spheres were used. The 3M particles tended to stay at the water surface or to settle at the bottom of the tank, which might be explained by their hygroscopic properties. The Optimage particles mixed very homogeneously with the fluid. The falling film was operated with both pure deionized water and water-glycerol mixtures. The viscosity of the mixtures was measured using a glass tube viscosimeter.
¹ Borofloat 33 by Schott AG, Mainz, Germany; http://www.schott.com
² Georg Fischer GmbH, Albershausen, Germany; http://www.piping.georgfischer.com
Figure 8.6: Schematic drawing of the piping in the falling film tank. The flow is either from container A to B (left) or from container B to A (right), while the in- and outlet of the tank stay the same. This is controlled by the two ball valves V1 and V2. The throughput Q is measured by a variable area flow meter upstream of the inlet.
8.2.3 Laminar Falling Film - Results
In the following, results of the measurements in the falling film tank are shown, demonstrating the capabilities and the limits of the measurement technique and the algorithms. The general procedure is as follows:
1. The acquired image sequence undergoes a simultaneous radiometric calibration and illumination correction (section 7.1.1). Then the background, represented by the minimum image, is subtracted (section 7.1.3); a minimal sketch of this background subtraction is given after this list.
2. The particles were segmented using the region growing procedure (section 7.2.2) or a fit of
Gaussians (section 7.2.3). The brightness parameter was either the maximum gray value or
the fitted gray value. The performance of these segmentation approaches is discussed later in
this section.
3. The velocity vector fields are extracted using the optical-flow-based method (section 7.3). The depth coordinate of a particle was obtained using the brightness parameters of two subsequent images (section 7.4). Correspondences between image pairs were found using the warping technique explained in section 7.5.1.
4. Postprocessing was performed in several ways. Firstly, particles with low gray values and thus low expected signal-to-noise ratio are rejected. Secondly, particles were connected to trajectories (section 7.6.1). Afterwards the velocity and depth components of a moving particle were smoothed along its trajectory (section 7.6.2). This will be discussed later in this section.
5. Calibration is performed using the technique presented in section 8.1, using a calibration
target of width d = 1 mm.
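The background subtraction of step 1 can be sketched as follows; this is a minimal illustration assuming the sequence is available as a NumPy array, and the actual implementation of section 7.1.3 may differ in detail.

```python
import numpy as np

def subtract_background(frames):
    """Background removal as in step 1: the background is represented by the
    minimum image over the sequence and subtracted from every frame
    (a minimal sketch; the implementation of section 7.1.3 may differ).

    frames : array of shape (T, H, W) containing the gray-value images."""
    frames = np.asarray(frames, dtype=np.float32)
    background = frames.min(axis=0)               # minimum image over time
    return np.clip(frames - background, 0, None)  # clip negative values to zero
```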
Use of the linear positioner
The maximum achievable displacements are limited by the temporal sampling theorem. This means that an upper limit for the measurable velocity of a tracer particle is given by its imaged spatial dimensions. An imaged pixel size of 28 µm/pixel was chosen. Assuming a slight presmoothing of the particle, this limit is about 4 pixels/frame. Illumination and camera allow a maximum achievable frame rate of about 250 Hz, which has to be divided by two, because only every second recorded image can be used for motion analysis (see Fig. 7.13). At a higher frame rate (theoretically up to 1000 Hz), the dynamic range of the gray values cannot be exploited, because the luminous flux is too low due to the shorter exposure times. With these specifications the maximum velocity is
4 pixels/frame = 4 · 28 µm / 8 ms = 14 mm/s.
One can improve this limit by mounting the imaging setup on a linear positioner which moves relative to the fluid at about half of the expected maximum flow velocity. In Fig. 8.7, left, averaged velocity profiles without (top) and with (bottom) the use of a linear positioner are shown. Exactly the same flow conditions (here Q = 30 l/h, ν = 4.845 mm²/s, α = 1°) are applied, so that the velocity profiles can be compared in terms of accuracy. With the linear positioner the precision of the depth of the deeper particles is much better. This is reflected in the larger statistics for deeper particles, and the averaged values lie on the theoretical parabola (see Eq. (8.2)) very well even in the deeper part of the profile. Figure 8.7, right, shows the trajectories acquired in this sequence.
Effects of a Gaussian fit applied to the segmented particles
In the previous experiments the brightest gray value is assumed to be representative of the intensity of a particle. Alternatively, the outcome of a Gaussian fit to the gray values of a segmented particle can be used as an intensity measure. In the analysis of Fig. 8.8 both kinds of intensity measures were used for the calculation of the depth coordinates. Every point in the cloud represents a measured particle's velocity assigned to a measured particle's depth (z_i, u_i). Moreover, the averages and the standard deviations of the particles' velocities in bins of 50 µm width were calculated. Afterwards the average velocities were fitted with a polynomial of second order. The Gaussian fit does not reduce the statistical errors significantly.
Effects of the trajectory-based postprocessing
The previous analysis has shown that the theoretical parabolic velocity profile is reproduced by the averaged experimental velocity profile in very good agreement. Averaging is feasible here because the flow is stationary. Provided systematic errors are absent, the accuracy of the measurement can be improved by extending the measurement time. However, for an instationary flow, averaging does not make sense if one wants to extract the instantaneous velocity fields, at least if the time scales of the advection of the flow field are shorter than the averaging time. For this reason we apply the postprocessing techniques described in section 7.6. Figure 8.9 illustrates these methods:
Figure 8.7: Use of a linear positioner in a falling film experiment (Q = 30 l/h, ν = 4.845 mm²/s, α = 1°, recorded at 250 Hz, Optimage particles), displayed by means of the velocity profile u(z) (horizontal velocity in pixels/frame over the depth from the surface in mm) and by means of a velocity map. a, b No linear positioner used (sequence 4g_250Hz_1; fitted parabola with minimum at (0.059, 3.303)). c, d The recording hardware is mounted on a linear positioner table, which moves at a speed of 7.5 mm/s, i.e. about 2 pixels/frame (sequence 4g_250Hz_lp1; fitted parabola with minimum at (0.037, 1.135)).
Figure 8.8: Effects of fitting demonstrated in a falling film experiment (sequence 4g_133Hz_7; Q < 15 l/h, ν = 4.845 mm²/s, α = 1°, recorded at 133 Hz, Optimage particles), displayed by means of the velocity profile u(z) (u in pixel/frame, z in mm). a Maximum gray value taken as characteristic brightness parameter of a particle. b Characteristic brightness parameter is the maximum of a Gaussian fit applied to a 5 × 5 neighborhood of a particle. The two fitted parabolas are y = 2.497x² + 0.266x + 3.285 (residuum 0.013, minimum at (0.053, 3.292)) and y = 1.908x² + 0.344x + 3.306 (residuum 0.116, minimum at (0.090, 3.327)).
• If all vectors are plotted, the measurement becomes erroneous: the signal-to-noise ratio of the low-intensity particles is very poor. To prevent this, we applied a threshold on the minimum allowed gray value of a particle.
• Smoothing of the z-coordinate or of the u-component helps to reduce the errors significantly.
8.2.4 Simulation of a Laminar Falling Film
In the error analysis in section 5.3 it was argued that the relative error of the maximum gray value, k = ∆g/g, is a function of the particle size σ. The oscillations in the gray value that appear when a particle moves steadily in one direction (see Fig. 7.15) were conjectured to stem from the fact that the number of photons a pixel can collect depends on the position of the particle's centroid relative to the pixel. We quantified k(σ) using simulations (see Fig. 8.10; a sketch of this kind of simulation is given below).
Furthermore, a sequence was simulated in which the particles move according to a parabolic profile. The particles were textured as Gaussian blobs advected with the flow. The same algorithms as in the previous section were applied to this sequence. Figure 8.11 shows clearly that the oscillations in the gray value are not caused by the physics of the flow, but result from the error addressed above.
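The following sketch illustrates the kind of simulation described above: a Gaussian blob of width σ is shifted in sub-pixel steps across the pixel lattice and the variation of its brightest pixel is recorded. The error measure used here (peak-to-peak relative variation) is an assumption and may differ in detail from the mean error plotted in Fig. 8.10, but it roughly reproduces the quoted magnitudes of k(σ).

```python
import numpy as np

def max_gray_error(sigma, n_offsets=100, half_size=5):
    """Relative variation k = Delta g / g of the brightest pixel of a Gaussian
    blob of width sigma as its centre is shifted in sub-pixel steps across the
    pixel lattice (one possible error measure; Fig. 8.10 uses a mean error)."""
    x = np.arange(-half_size, half_size + 1)
    xx, yy = np.meshgrid(x, x)
    peaks = []
    for dx in np.linspace(0.0, 1.0, n_offsets, endpoint=False):
        blob = np.exp(-((xx - dx) ** 2 + yy ** 2) / (2.0 * sigma ** 2))
        peaks.append(blob.max())                  # maximum gray value of this frame
    peaks = np.array(peaks)
    return (peaks.max() - peaks.min()) / peaks.max()

for sigma in (0.71, 1.0, 1.41):
    print(sigma, round(max_gray_error(sigma), 2))
```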
8.2.5 Discussion: Laminar Falling Film
The velocity profile in a laminar falling film is well known. Moreover, by tuning its parameters (inclination angle, viscosity and throughput), flows over a wide range of maximum velocity and maximum depth can be generated. Thus it serves well as a physical “ground truth” to test the method of two wavelengths.
The present measurements reproduce the predicted parabolic velocity profile Eq. (8.2) very well if one is allowed to average over “bins” in the depth direction. This procedure is permitted in the presence of a stationary flow or of a flow whose flow field varies only slowly in time.
Figure 8.9: Effects of smoothing demonstrated in a falling film experiment (Q = 15 l/h, ν = 3.676 mm²/s, α = 1°, recorded at 250 Hz, 3M glass hollow spheres), displayed by means of the velocity profile u(z): a All vectors, no smoothing. b Only particles with gray value at 470 nm > 25 selected. c The depth coordinate z is smoothed along the trajectories. d Depth coordinate z and velocity component u are smoothed along the trajectories.
By applying postprocessing techniques the results could be improved with a view to applicability to instationary flows. However, the individual (not the averaged) measurement points (which represent measured velocities at measured depths) exhibit considerable scatter.
The measurement setup had two limitations: Firstly, the Camrecord 2000 high-speed camera was of rather poor quality regarding the spatial homogeneity of the camera sensor (see section 6.4.2 and section 7.1.1). Secondly, due to the relatively fast flow, high frame rates and short exposure times were required, which in turn requires a powerful illumination. The use of a linear positioner, moving at about half of the flow's maximum velocity, helped to attenuate the second limitation.
Simulations show that the measurement error depends strongly on the particle size, which in turn calls for a camera sensor with a high resolution.
Figure 8.10: Quantification of the relative error of the maximum gray value k = ∆g/g as a function of σ. Top: Examples of three particles (with widths σ = 0.71, 1 and 1.41, giving k(σ) = 0.24, 0.14 and 0.07, respectively) “flowing” over the pixel lattice. Bottom: For each σ a mean error is calculated and plotted, resulting in the function k(σ) (width of the particles σ in pixels).
8.3 Measurements in a Convection Tank
The measurements in the falling film showed that the technique is capable of reproducing the exact flow fields (here in the form of a 1D profile) as long as the flow is stationary. In contrast, convective turbulence represents a flow which is intrinsically 3D and instationary.
Compared to a falling film, convection is a slow process, so higher-quality hardware can be used. In this section, we introduce the theoretical aspects of turbulent convection, describe the measurement setup and show results.
8.3.1 Turbulent Convection - Theory
We are concerned with convection driven by buoyancy and by evaporation. For a rigorous mathematical treatment see [Kundu, 1990] and [Koschmieder, 1993].
Buoyancy is realised by heating the fluid in a tank from below [Kundu, 1990]. Small disturbances (which occur because of the inhomogeneity of the boundary conditions) may cause a high-temperature fluid packet at the bottom to move upward. While rising, the difference between its
Figure 8.11: Results of the simulation of falling film experiments (width of the particles 0.71 (left) and 2 (right)). a 2D image and 1D profile. b Gray values of two trajectories. c z-coordinate of two trajectories. d Velocity profile u(z) with fitted parabola (red) and ground truth (green). e Standard deviations of the z-coordinate of the trajectories as a function of their maximum gray values.
Figure 8.12: Left: Sketch of buoyant evaporative convection: the water in the insulated tank is heated from below; evaporation at the upper surface causes a transfer of latent heat; the vapour is transported away by “dry air”. Right: Diagram showing the Rayleigh number for water as a function of the bulk water temperature (d = 4 cm, T2 = 20 °C). For comparison, Ra for glycerol is also plotted.
density and the density of the colder surroundings increases, which accelerates the upward motion (positive feedback). Conversely, a low-temperature fluid packet from the top may reach slightly warmer regions because of random fluctuations. Because it is now heavier than its surroundings, its downward motion accelerates. Originally hot and originally cold fluid elements exchange their places; the final result is a continuous circulation.
[Rayleigh, 1916] derived the theoretical requirement of buoyant convection in a layer of fluid
with two free surfaces. He showed that the instability would occur when the ratio
Ra = (buoyancy force)/(viscous force) = (g α d³ ∆T)/(D_H ν),
which is known as the Rayleigh number, exceeds a certain critical value. This number depends on
the geometry of the fluid layer, which is expressed by its vertical dimension d (its horizontal
dimension is assumed infinite), on the temperature difference from top to bottom surface ∆T =
T1 − T2 and on material properties of the fluid: kinematic viscosity ν, thermal diffusivity DH
and thermal expansion coefficient α. The critical Rayleigh number, which marks the onset of
buoyant convection for a free upper surface and a rigid bottom wall, is about 1100. Here the flow
forms regular patterns like regular polygons, depending on the boundary conditions. As Ra is
further increased, the cell structure becomes more chaotic, and the flow becomes turbulent when
Ra > 54000 [Kundu, 1990].
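As a small worked example of the definition above, the following sketch evaluates Ra; the default material properties are approximate values for water near 25 °C (the thermal diffusivity is the constant value used later in this section) and serve only for illustration.

```python
def rayleigh_number(delta_T, d=0.04, alpha=2.6e-4, nu=0.89e-6, D_H=1.5e-7, g=9.81):
    """Ra = g * alpha * d^3 * delta_T / (D_H * nu); the defaults are approximate
    material properties of water near 25 degC (alpha: thermal expansion [1/K],
    nu: kinematic viscosity [m^2/s], D_H: thermal diffusivity [m^2/s])."""
    return g * alpha * d ** 3 * delta_T / (D_H * nu)

# A 4 cm deep water layer with a 5 K temperature difference lies far above the
# turbulent threshold of Ra ~ 54000:
print(f"Ra = {rayleigh_number(5.0):.2e}")
```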
The heat transfer from water to air occurs via two modes: firstly, the temperature difference between air and water causes sensible heat transfer; secondly, evaporation causes latent heat transfer. The cool and thus dense water parcels at the interface plunge into the deep water; the warm and light ones rise and penetrate into the cool air above.
Figure 8.13: Left: Drawing of the convection tank (labelled parts: mirror for uniform illumination, LEDs (blue & royal blue), cooling units, electronics, air flow, inlet for “dry air”, honeycomb to suppress air turbulence, heating, access for temperature and humidity sensors). Right: Convection tank with image acquisition unit.
The transition of water from the liquid to the gaseous state requires a certain amount of heat, which is known as the heat of evaporation.
Figure 8.12 shows a schematic sketch of the conditions in our experiments. The Rayleigh numbers of water for different temperatures in the water bulk were calculated, where the temperature of the top is set to the air temperature, T2 = 20 °C. Both the thermal expansion coefficient and the viscosity depend strongly on temperature, whereas the thermal diffusivity is assumed to be constant: D_H,water = 1.5 × 10⁻⁷ m²/s. For water the Rayleigh numbers are in the highly turbulent range. Due to the much greater viscosity, the Rayleigh number of glycerol is orders of magnitude lower (Ra_glycerol(∆T = 10 °C) = 27000). Because we use water-glycerol mixtures, we are nevertheless in the turbulent range in our experiments.
8.3.2 Turbulent Convection - Setup
For the turbulent convection measurements a tank of dimensions 20 × 20 × 4 cm³ was constructed. The tank was made entirely of acrylic glass, with double-glazed walls for insulation. It is equipped with sensors for the bulk and air temperature and for the relative air humidity. Two quartz glass windows allow measurements using UV spectroscopy. Heating was realized by warming an aluminum plate located under the bottom of the tank using attached heating foils.
The image sequences were acquired by a Basler 641f camera, which has a resolution of 640 × 480 pixels² at 30 fps. 55 µm were imaged onto one pixel, so that the displayed window was about 3.5 × 2.6 cm². The distance between camera and object was about 60 cm, so that the aperture angle of the objective was only about 3°. Therefore lateral variations of size in the imaged volume can be neglected, and this experiment was done without telecentric optics. The light sources 1 and 4 (both royal blue – blue) were installed (see Fig. 6.12). Using light source 4, one can simultaneously apply a second camera (such as an infrared camera). Figure 8.13 shows a drawing
Figure 8.14: Clipping of an example image sequence. Left: Depth map of the trajectories (depth z colour-coded from −0.25 mm (blue) to 1 mm (red)). Right: The (smoothed) maximum gray values at 455 nm and 470 nm, the (smoothed) depth z [mm] and the vertical velocity w [mm/frame] of a selected particle moving along its trajectory, plotted over the frame number.
and a photograph of our tank.
Because an imaged particle should cover about 2 to 5 pixels, and in order to increase the reflected intensity, we selected the Potters particles (see section 6.1.2) to trace the fluid. The goal was to illuminate a volume of about 1 cm depth, so that their mean size of 100 µm was appropriate. We used pure deionized water and a water-glycerol mixture (ratio about 1:1) as fluid.
8.3.3 Turbulent Convection - Results
Experimental results obtained by measurements in our convection tank are presented. It is shown that the technique can be applied to highly complex, fully 3D and instationary flows. An overview of the overall procedure:
1. The acquired image sequence was corrected for illumination inhomogeneity (section 7.1.2)
and background (section 7.1.3).
2. The particles were segmented using the region growing procedure (section 7.2.2). The brightness parameter was the maximum gray value.
3. As in the falling film experiments, 3C velocities and 3D positions of the particles were extracted.
4. Postprocessing included connecting the particles to trajectories (Lagrangian representation), as well as elongating and smoothing them (see section 7.6). Moreover, by applying the adaptive Gaussian windowing scheme, regularly spaced Eulerian velocity vector fields were obtained (a simplified sketch of this interpolation is given below).
5. Calibration is performed using the technique presented in section 8.1, now using a calibration target with a cuvette of d = 10 mm width.

Table 8.1: Experiment 060831, performed with constant heating. The heating was started at 6:02 pm.

  Figure        Time      T_water     T_air       relative humidity
  Figure 8.16   6:06 pm   22.98 °C    25.15 °C    32%
  Figure 8.17   6:39 pm   26.50 °C    25.37 °C    62%
  Figure 8.18   7:07 pm   29.00 °C    25.89 °C    81%
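Step 4 mentions the adaptive Gaussian windowing scheme. The following is a simplified, fixed-width sketch of the underlying idea, namely Gaussian-weighted averaging of scattered particle velocities onto a regular grid; the adaptive choice of the window size used in the thesis is omitted, and all names are illustrative.

```python
import numpy as np

def gaussian_windowed_grid(positions, velocities, grid_x, grid_y, sigma=0.5):
    """Interpolate scattered particle velocities onto a regular Eulerian grid by
    Gaussian-weighted averaging (fixed window width; the adaptive scheme of the
    thesis additionally adapts this width to the local particle density).

    positions  : (N, 2) array of particle positions (x, y)
    velocities : (N, C) array of velocity components, e.g. (u, v, w)
    grid_x, grid_y : 1D arrays defining the regular grid."""
    positions = np.asarray(positions, dtype=float)
    velocities = np.asarray(velocities, dtype=float)
    field = np.zeros((len(grid_y), len(grid_x), velocities.shape[1]))
    for j, gy in enumerate(grid_y):
        for i, gx in enumerate(grid_x):
            d2 = (positions[:, 0] - gx) ** 2 + (positions[:, 1] - gy) ** 2
            w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian window weights
            if w.sum() > 1e-12:
                field[j, i] = (w[:, None] * velocities).sum(axis=0) / w.sum()
    return field
```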
Motion perpendicular to the image plane
Figure 8.14 shows a clipping of an example image sequence. Most of the tracer particles float at the surface, but some of them are transported into deeper fluid regions and move in other directions. The vertical velocities obtained by the optical-flow-based technique are consistent with the z-coordinates, as demonstrated by following the trajectory of a selected particle.
Turbulent convection varying the temperature difference
The image sequences outlined in table 8.1 were recorded under identical conditions; only the temperature difference varied as a result of constant heating with a power of 20.8 W. The arising vapour was transported away by dry air streaming through the tank at a constant rate of five liters per minute. Tartrazine dye (about 25 mg/liter) and tracer particles (Potters, mean density 1.1 g/cm³) were added to the fluid, a water-glycerol mixture. Light source 4 (blue–cyan), operating at a maximum power of 12 W, illuminated a fluid volume of about 3.5 × 2.6 × 1.5 cm³.
The figures referred to in table 8.1 show:
• Top: The trajectories as seen by the camera, where the colour information represents the depth z and the horizontal velocity u.
• Center: Velocity vector fields obtained by the adaptive Gaussian windowing technique, starting from the deepest layer (distance from the surface z = 9.5 mm) and moving upwards. The colour code is the same as in the velocity map of the particles (top, right), but here the w-component of the velocity is shown.
• Bottom: Vertical profiles of the mean and rms velocities (red: u, blue: v, black: w).
Figure 8.16 shows the situation four minutes after the heating was turned on. The seeding particles in the deeper layers move with a maximum speed of about 1 pixel/frame (≈1.5 mm/s). There is almost no motion in the upper layers. This behaviour is reflected in the vertical profiles.
In Fig. 8.17, after 37 minutes of heating, the convective motion has become faster (up to 2 pixels/frame, i.e. 3 mm/s) and more chaotic. Looking at the interpolated vector fields, upward and downward moving cells can be identified. The vertical motions tend to cancel, i.e. w_mean equals zero for all measurable depths, but w_rms increases with increasing depth.
After 63 minutes of heating (see Fig. 8.18) we recognize that turbulence has developed further, which is evident from the velocity profiles. The fluctuations in horizontal velocity, u_rms and v_rms, start near zero at the surface, reach a maximum at a depth of about 5 to 7 mm and are then damped.
Figure 8.15: Vertical profiles of the rms turbulent velocities. Left: Horizontal. Right: Vertical. The three
distinct profiles in each figure refer to different heat fluxes applied.
The fluctuations in vertical velocity, w_rms, show a different behaviour: they rise almost monotonically with increasing depth.
Qualitative comparison with experiments done by other researchers
The phenomenon of natural convection has been investigated by other researchers. [Volino and Smith, 1999] simultaneously measured the surface temperature and two-dimensional subsurface velocity fields using infrared thermography and PIV (see section 3.3.1). They found that the water surface is cooled by evaporation and the cooler water plunges into the warmer bulk below. [Flack et al., 2001] measured the surface temperature of water undergoing evaporative convection using infrared imagery; the subsurface turbulence was measured using two-component LDA. They conducted the experiments both with a clean surface (absence of shear) and with a surfactant monolayer present, and observed fundamental differences between the flow fields in the two cases.
[Bukhari and Siddiqui, 2006] investigated the two-dimensional velocity field beneath the surface in a plane perpendicular to the surface using 2D PIV. They also measured the vertical profiles of the velocity fluctuations (see Fig. 8.15). Although they used water instead of a water-glycerol mixture, and the dimensions of their tank were very different from ours, a qualitatively similar behaviour of the profiles can be observed. Their u_rms exhibits a maximum some millimeters below the surface and declines towards greater depths. Their w_rms increases monotonically with increasing depth.
8.3.4 Discussion: Turbulent Convection
The measurements in the convection tank differ in two ways from the measurements in the falling film: Firstly, the motions of the flow are slower, so that a camera with a higher-quality sensor and a higher resolution can be employed. Secondly, the turbulent flow is intrinsically 3D and instationary.
Figure 8.16: Experiment 060831, ∆T = −2.17 °C. For details see section 8.3.3.
Figure 8.17: Experiment 060831, ∆T = 1.13 °C. For details see section 8.3.3.
Figure 8.18: Experiment 060831, ∆T = 3.11 °C. For details see section 8.3.3.
The drawback of this kind of flow is that there is no analytic solution at hand. In this case we are restricted to qualitative evaluation and to comparison with measurements by other researchers. Looking at the vertical profiles of the rms velocities, we indeed find qualitatively similar behaviour between our results and those of [Bukhari and Siddiqui, 2006].
By simultaneously applying an established technique (such as thermography), we should be able to test our method quantitatively. This is under investigation.
8.4 Application to Sequences acquired in Context of Biofluidmechanics
It was already mentioned in section 5.1 that the method of one wavelength was developed in the field of biofluidmechanics by [Debaene, 2005]. The biofluidmechanics lab³, a department of the Charité, a clinical center in Berlin, Germany, is concerned with the application of engineering methods to problems encountered in medicine, with an emphasis on fluid mechanics.
After summarising the medical background of the analysed experiments, the measurement setup is sketched. Finally, some results of our analysis are presented, consisting of the determination of both the velocity and the wall shear rate.
8.4.1 Medical Background and Motivation
The investigation of the flow near the wall of a blood vessel or an artificial organ is of great interest, since a close relationship is known between the characteristics of the flow such as magnitude
and direction of the wall shear stress and biological phenomena such as thrombus formation or
atherosclerotic events. The wall shear stress influences structure and function of endothelial cells
as well as the behaviour of platelets.
Blood is a non-Newtonian, shear-thinning fluid, i.e. its viscosity decreases with increasing wall shear rate. The reason is that at low shear rates the red blood cells join into aggregates, which offer a high resistance to the shear forces. As the shear forces increase, these structures dissolve and the viscosity decreases. Moreover, the cells change their orientation and their shape as the shear forces rise, which results in an additional loss of internal friction. At high shear rates (above 100 s⁻¹) blood can be approximated by a Newtonian fluid with a viscosity of η ≈ 3.5 × 10⁻³ Pa·s.
Together with the non-Newtonian characteristics of blood, the flow in blood vessels has an impact on blood clotting. Other factors promoting intravascular blood clotting are a damaged endothelial layer or contact with a foreign surface. In 1856 Robert Virchow therefore postulated the interaction of blood flow, vessel wall and blood properties, today known as “Virchow's triad”, which is largely confirmed by clinical experience. A precise measurement of the wall shear stress should help to quantify these complex interrelations.
8.4.2 Measurement Setup at the Charité
The ultimate goal of the Charité group is to develop a versatile method to measure the spatial distribution of the wall shear rate in models of blood vessels or artificial organs. Difficulties include the instationary, pulsatile flow as well as curved and moving walls. The Charité group used exactly spherical, monodisperse particles of 300 µm diameter. Thus it is sufficient
³ http://www.charite.de/biofluidmechanik/
Figure 8.19: a Linear glass tank with a rectangular cross-section. b U-shaped channel with a rectangular step. c Computed path lines in a cut parallel to the x–y-plane at y = D_b/2 in the U-shaped channel with step (from [Debaene, 2005]).
to illuminate using one wavelength, which improves accuracy and simplifies both the measurement setup and the analysis. The Berlin group chose a blue dye (patent blue V) and an illumination setup consisting of LEDs emitting in the red spectral range (peak wavelength 639 nm). [Debaene, 2005] separated the near-wall flow into two layers by means of gray-value thresholding. For each layer, which is characterized by a distinct distance from the wall, the motion of the particles was determined with a conventional PIV algorithm (for PIV see section 3.3.1). This resulted in a vector field u(x) for each layer.
We applied our algorithmics to two sequences, each one dealing with flow in a distinct geometry.
Spatially homogeneous and stationary flow
A mixture of water and glycerol (ν = 10.96 × 10⁻⁶ m²/s) flows through a linear glass tank with a rectangular cross-section (see Fig. 8.19, a). This results in the following velocity profile in the y–z-plane (for its derivation see [Debaene, 2005]):
v(z) = 6 v_mean (z/L_t − z²/L_t²),
Figure 8.20: Velocity profile of an image sequence (marked as “STA3” in [Debaene, 2005]). Both the output of our optical-flow-based computation (black) and the PIV analysis of the Charité group (red) are in line with the analytical profile (green) within their statistical errors.
with the mean vertical velocity v_mean = 11.96 mm/s and the channel depth L_t = 5 mm. In contrast, the flow field in the x–y-plane is nearly homogeneous, which was verified by a conventional PIV measurement.
The experiment was conducted by illuminating the glass tank frontally with diffuse light. A high-speed camera recorded the scene at 125 frames/s and a resolution of 512 × 480 pixels².
Stationary, complex flow in a U-shaped channel
A water-glycerol mixture (ν = 8.372 × 10⁻⁶ m²/s) flows through a U-shaped channel with a rectangular cross-section and a step (see Fig. 8.19, b). In combination with the bend, the step in the cross-section generates a complex flow which detaches from the wall (see Figure 8.19, c). Figure 8.23, left, shows a sample image of the recorded sequence.
8.4.3 Results of our Analysis
Spatially homogeneous and stationary flow: velocity profile
First, results concerning the flow in the linear glass tank are shown. Because we deal with a spatially homogeneous and stationary flow, we are mainly interested in the velocity profile. The image sequence had already been corrected for illumination and background by the Charité group.
The procedure of the image analysis was as follows:
The procedure of the image analysis was as follows:
1. The particles were segmented using a fit of Gaussians (section 7.2.3. The characteristic brightness parameter of a particle was its maximum gray value. Further we calculated a depth map
assigning every particle to a z-coordinate, which can be directly calculated from the maximum
Figure 8.21: Left: Velocity vector fields at various depths (layers from z = 0.195–0.215 to z = 0.295–0.315). Right: Distribution of the wall shear rate computed from the velocity data.
2. The velocity vector fields are extracted using the method of optical flow. Because wall-parallel motion is assumed, the simple optical flow constraint without brightness variations was used (Eq. (4.2)).
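The depth reconstruction of step 1 can be sketched as follows, assuming Eq. (5.4) has the usual exponential Beer-Lambert form g(z) = g0 exp(−z/z*); if the thesis uses a different parametrisation, the inversion changes accordingly.

```python
import numpy as np

def depth_from_gray_value(g, g0, z_star):
    """Invert Beer-Lambert's law to obtain a particle's wall distance, assuming
    the exponential form g(z) = g0 * exp(-z / z_star); g may be a scalar or an
    array of maximum gray values, g0 and z_star are the calibration parameters."""
    return z_star * np.log(g0 / np.asarray(g, dtype=float))
```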
Figure 8.20 shows the results of this analysis. Each point in the cloud represents a single flow vector in the sequence. Moreover, the averages over 10 µm wide bins were calculated and displayed together with their error bars. The averaged values lie about 10 to 15% above the analytical solution, which is marked by a solid line. [Debaene, 2005] spatially averaged the vector fields obtained by applying PIV to the thresholded layers; these are indicated as red horizontal bars.
Stationary, complex flow in a U-shaped channel: 3D2C velocity field
Using the same procedure as in the previous paragraph, the velocity vector fields were estimated for an image sequence recorded by the Charité group. In Fig. 8.21, left, the velocity vector fields at various depths are plotted.
Stationary, complex flow in a U-shaped channel: Wall shear rate
Using the vector fields, the field of the wall shear rate (γ_x(x, y), γ_y(x, y)) was estimated. To this end, ordinary least-squares fits to the local velocity profiles were applied, followed by calculating the magnitude of the resulting wall shear rate vectors γ_m(x, y):
(u(x, y, z), v(x, y, z))^T  --(OLS fit)-->  (γ_x(x, y), γ_y(x, y))^T  →  γ_m(x, y) = √(γ_x² + γ_y²).
Figure 8.22: Estimation of the wall shear rate: in the vicinity of the wall the tangent to the velocity profile (∂u/∂z) can be approximated by a line which passes through the origin (no-slip condition) and whose slope equals the difference quotient ∆u/∆z.
Here “local” means that the input to the fit were blocks of 33 × 33 pixels². We implicitly assumed that the differential quotients in the definition of the wall shear rate can be replaced by difference quotients (see Fig. 8.22). This is permissible because the visible particles fall within a region where the velocity distribution is considered to be proportional to the wall distance:
(γ_x, γ_y) = [∂u/∂z, ∂v/∂z]_{z=0} ≈ (∆u/∆z, ∆v/∆z).
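A minimal sketch of such a local fit is given below: within one block, lines through the origin are fitted to the measured (z, u) and (z, v) samples, which is the closed-form OLS solution under the no-slip constraint. This is an illustration, not the thesis code.

```python
import numpy as np

def wall_shear_rate(z, u, v):
    """Wall shear rate components from one local block of particle data:
    lines through the origin (no-slip condition) are fitted to (z, u) and
    (z, v), giving gamma_x = du/dz and gamma_y = dv/dz, and the magnitude
    gamma_m = sqrt(gamma_x^2 + gamma_y^2)."""
    z = np.asarray(z, dtype=float)
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    denom = np.dot(z, z)
    gamma_x = np.dot(z, u) / denom     # OLS slope of a line through the origin
    gamma_y = np.dot(z, v) / denom
    return gamma_x, gamma_y, np.hypot(gamma_x, gamma_y)
```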
In section 7.7.1 the parametrization of three-dimensional physical flow fields was addressed. We have already seen that the wall shear rate components can be regarded as components of the more general velocity gradient tensor: (γ_x, γ_y) = (γ_13, γ_23). Therefore Eq. (7.6) could be applied to the data. Since the diameter of the spheres is significantly larger than the penetration depth, only the particles close to the wall are imaged and no overlapping particles are recorded. Therefore the problem of multiple motions does not occur in this situation. Although the flow is not uniform, Eq. (7.7) could be applied, since the components of the velocity gradient tensor containing the derivatives with respect to z are large compared to the components containing the derivatives with respect to x and y.
The components γ_13 and γ_23 of the wall shear rate are mapped in Fig. 8.23, right. Since the examined flow is stationary, the shear rate is averaged over 300 frames. The gaps in the otherwise dense field indicate the spots where the confidence measure provided by the structure-tensor technique was too low to yield a reliable result.
Figure 8.24 displays the magnitude of the wall shear rate obtained with the different techniques and compares the methods with each other. To establish some kind of “ground truth”, [Debaene, 2005] computed the flow numerically with the solver FLUENT 6⁴, which is shown in Fig. 8.24, top, left. The analysis of the flow using the PIV technique and subsequent derivation of the wall shear rate, as carried out by [Debaene, 2005], is shown in Fig. 8.24, top, center. Our result is mapped in Fig. 8.24, top, right. In order to compare the techniques with each other, the gaps were filled by means of interpolation, and the result was afterwards smoothed using 2D anisotropic diffusion.
⁴ Fluent Inc., Lebanon, USA; http://www.fluent.com/
Figure 8.23: Left: Example image of the recorded image sequence. Right: Maps of the wall shear rate components γ_13 and γ_23 estimated with our algorithm.
Our optical-flow-based method provides a dense, highly resolved vector field of the wall shear rate, which is capable of estimating this quantity at positions where the PIV method fails. The flow detachment in the lower left corner, for instance, can be reproduced very well. Both techniques, optical flow and PIV, show a deficit of the wall shear rate in the upper left corner and a surplus in the upper right corner, compared to the analysis provided by computational fluid dynamics. These systematic deviations may result from the fact that the particles, which are comparatively large, cannot follow the fluid ideally, or influence the fluid. To quantify how far the results of the experimental methods deviate from the numerical solution, we summed the magnitudes of the differences over all pixels. Optical flow gave about 10% better results than PIV when the CFD solution was regarded as the “ground truth”. Besides that, the optical flow analysis yielded a much better spatial resolution, and the area where a reliable estimate of the wall shear rate is possible is about 30% larger than the area obtained using the PIV analysis.
8.4.4 Discussion: Biofluidmechanics
Both the reconstruction of the velocity fields and the direct estimation of the wall shear rates were applied to the image sequences provided by [Debaene, 2005] and underwent a quantitative evaluation. The stationary flow through a tank with a rectangular cross-section can be solved analytically; our results lie 10 to 15% above the predictions. The Charité group computed the more complex flow in a U-shaped channel with a rectangular cross-section numerically, which serves as a reference to evaluate our reconstruction of the wall shear rate. Compared to the PIV analysis, the method proposed in this thesis provides a higher resolution and a greater coverage, so that the essential features of the flow become clearly visible.
Figure 8.24: Top: Wall shear rates determined by computational fluid dynamics (left), the PIV technique (center) and our optical-flow-based method (right). Bottom: Pointwise differences between CFD and PIV (left), CFD and optical flow (center), and optical flow and PIV (right).
9 Summary and Outlook
The aim of this thesis has been the development and testing of a technique for the spatio-temporal
analysis of flows close to free water surfaces. Knowledge about the 3D-flow fields within and
beneath the water-side viscous boundary layer is vitally important for our understanding of the
transport of momentum and mass. Previous methods applied to this question commonly rely on
2D laser light sections followed by velocity reconstruction based on cross correlation. The new
method overcomes the restriction of planar dimensionality by both a sophisticated experimental
setup and data analysis based on contemporary image processing techniques.
9.1 Summary
A fluid volume is illuminated by light emitting diodes. Small spherical particles are added to
the fluid, functioning as a tracer. A camera pointing to the water surface from above records
the image sequences. A dye is added to the fluid, which limits the penetration depth of the
light into the flow according to Beer-Lambert’s law. Within the illuminated layer the particles
appear more or less bright, depending on their normal distance to the wall: Particles near the
wall appear brighter, i. e. have a higher gray value than particles farther away from the wall.
By illuminating the fluid alternately with LEDs of two different emission spectra, which overlap unequally with the absorption spectrum of the dye, the size of a particle does not have to be known, so that the size distribution of the spheres may be broad. A proper selection of tracer particles, dye and LEDs is essential for the presented method. As in all quantitative flow visualisation techniques, optics and camera have to be adjusted to the experiment.
Information about the 3D position of the particles in the fluid is retrieved from 2D gray-value data. Special care has to be taken concerning the preprocessing of the image sequences and the segmentation of the particles. The three components of a particle's velocity are obtained by applying an extended optical-flow-based approach. The information gained by the preceding optical-flow computation was used to improve the performance of the particle tracking. Thus both the Eulerian velocity field and the Lagrangian flow representation can be extracted. Postprocessing tackles the problem of overlapping particles and increases the length of the trajectories. In order to determine the wall shear stress, the wall shear rate can be estimated directly, without previous computation of the velocity vector field, using parametric modeling of 3D affine flow fields.
Both the measurement setup and the algorithmics are tested in several ways:
1. For correct depth reconstruction an easy-to-handle two-point calibration technique was employed. The applicability of Beer-Lambert's law was validated by measurements using a linear positioner.
2. A laminar falling film serves as reference flow. The theoretical parabolic profile of this stationary flow could be reproduced very well by the measurements.
3. Buoyant convective turbulence acts as an example of an instationary, inherently three-dimensional flow. Qualitative investigations and comparisons with the relevant literature show that the results are physically plausible.
4. The direct estimation of the wall shear rate was applied to sequences recorded in the field of biofluidmechanics. Our results are in good agreement with the “ground truth” and outperform the analysis based on PIV, which is a standard method in experimental fluid mechanics.
9.2 Outlook
The technique described in this thesis constitutes the first part of a comprehensive research project embedded in the priority program 1147 of the German Research Foundation. The ultimate goal of the project is the volumetric analysis of flows close to moving and wave-driven curved interfaces. Further steps to achieve this ambitious aim are:
• By performing simultaneous measurements with other techniques, such as infrared thermography, we can quantitatively test the applicability of the current setup and algorithmics to instationary 3D flows like turbulent convection.
• Surface divergences and convergences are related to the transport of momentum and mass. Modeling of 3D affine flow fields can be used for a direct estimation of these important quantities. Testing can be accomplished using the convection tank designed in the scope of this thesis.
• The measurement setup will be installed in a linear wind-wave tank. Using this facility, we can generate a shear flow. For flows with moderate speeds or when surfactants are applied, i.e. without waves present, the velocity profile obeys the universal wall law discussed in section 2.3.2. This provides a way to validate the computation of the wall shear stress.
• The analysis of wavy flows is a great challenge, due to refraction and geometrical distortions at the curved surface. In order to obtain the geometry of the moving boundary, the flow measurement has to be supplemented by simultaneous recording of the wave slope.
A Total Least Squares
An excellent overview of the TLS problem is given by [van Huffel and Vandewalle, 1991]. The term “total least squares” was introduced by [Golub and van Loan, 1980], though the method has been known in the statistical literature as orthogonal regression or errors-in-variables regression since the 19th century. [Naya et al., 2006] give a critical overview of the errors of OLS (ordinary least squares), TLS and equilibrated TLS, using the estimation of a 2D homography as a reference problem.
For our purpose we merely summarize the key aspects of data regression based on OLS, TLS, weighted TLS and equilibrated TLS. Fitting a straight line serves as a simple example to illustrate the benefits of errors-in-variables regression. The last section of this chapter presents the TLS estimation from normal equations. We will see that diagonalising the structure tensor J = D^T D is equivalent to a TLS fit to the data matrix D.
A.1 Ordinary Least Squares
To introduce the subject of TLS, we first treat the case of ordinary least squares (OLS). A regression problem can be summarised by the matrix-vector equation
A m = b,
where A ∈ R^{m×n} is the data matrix, m ∈ R^n is the parameter vector and b ∈ R^m is the observation vector. When fitting n points in the x–y-plane to a straight line, for example, A = (x_i, 1) is an n × 2 matrix containing the points, m is the two-dimensional parameter vector, and b contains the corresponding y-values. The task to be performed can be expressed as a minimisation problem,
‖b − b̂‖_2 → min,
where b̂ denotes the optimised observation vector. In the example above, b̂ would lie on the line described by the parameters m. The optimal solution in the OLS sense is given by applying the Moore-Penrose inverse (A^T A)^{-1} A^T to the observation vector:
m = (A^T A)^{-1} A^T b.
[Gauss, 1823] showed that the estimate m has the smallest variance in the class of estimation methods which display no systematic error and whose estimates are linear functions of b. One kind of differential optical flow technique is based on OLS: the local weighted least squares method originally proposed by [Lucas and Kanade, 1981].
A.2 Total Least Squares
Now we allow, that besides the data matrix A, the observation vector b also contains errors. Here
it is assumed, that the noise is the same for the data matrix and for the observation vector. Now
we arrive at a slightly different minimisation problem:
°
°
°
°
°(A, b) − (Â, b̂)° → min, (A, b), (Â, b̂) ∈ Rm×(n+1) ,
F
where k·kF denotes the Frobenius norm, that is kAk =
This can be transformed to
(D T p)2 → min
qP P
i
j
|aij |2 .
subject to pT p = 1,
where D = (Aᵀ, −b)ᵀ ∈ R^{m×(n+1)} is the matrix composed of the data matrix A and the observation vector b, and p is a new parameter vector of dimension n + 1. In order to avoid the
trivial solution, we require the norm of the parameter vector to be unity. This formulation of
the TLS problem is also known as the orthogonal L2 approximation problem. The sought model
parameters m are obtained from the parameter vector p by dividing the first n components
by the (n + 1)-th component:
m_i = p_i / p_{n+1}   for   i = 1, . . . , n.
In the following we will refer to the equation
Dp = 0
(A.1)
as the basic TLS equation. The minimisation problem can be written in integral form as follows:
∫_U (dᵀ(x′, t′) p)² dx′ dt′ → min   subject to   pᵀp = 1,
where d(x, t) are the rows of the data matrix D, and the integration is performed over a neighbourhood U in order to sufficiently constrain the problem.
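In practice, the constrained problem ‖Dp‖² → min with pᵀp = 1 is solved by a singular value decomposition of D: the minimiser is the right singular vector belonging to the smallest singular value. The following NumPy sketch (an illustration only, with made-up data) implements this and dehomogenises the result as in the equation above:

    import numpy as np

    def tls_fit(A, b):
        """TLS estimate of m in A m ~ b: build D = (A, -b), take the right singular
        vector belonging to the smallest singular value and dehomogenise."""
        D = np.column_stack([A, -b])
        p = np.linalg.svd(D)[2][-1]      # rows of Vt are the right singular vectors
        return p[:-1] / p[-1]            # m_i = p_i / p_{n+1}

    # Made-up example: line y = 2x + 1 with the same noise level in x and y
    rng = np.random.default_rng(1)
    x0 = np.linspace(-2.0, 2.0, 200)
    x = x0 + 0.3 * rng.standard_normal(x0.size)
    y = 2.0 * x0 + 1.0 + 0.3 * rng.standard_normal(x0.size)
    A = np.column_stack([x, np.ones_like(x)])
    print("TLS slope and intercept:", tls_fit(A, y))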
A.3 Weighted Total Least Squares
In the formulation of the TLS problem from the previous section, all rows of the data matrix D
were treated equally. But since in optical-flow-based motion estimation, for example, the observations are weighted differently depending on their relative location within a local
neighbourhood, we have to reformulate our minimisation problem as:
∫_{−∞}^{∞} w(x − x′, t − t′) (dᵀ(x′, t′) p)² dx′ dt′ → min   subject to   pᵀp = 1.
(A.2)
We have introduced a weighting function w, which weighs the points in the spatio-temporal neighbourhood differently according to their relative significance. We arrive at another representation
of the WTLS problem by multiplying the data matrix D with a left-hand weighting matrix W_L:
‖(W_L D) p‖² → min.
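One common way to realise the left-hand weighting is W_L = diag(√w_i), so that the minimised quantity becomes Σ_i w_i (d_iᵀ p)². A small sketch of this choice (my own illustration; the Gaussian weighting and the random data are arbitrary):

    import numpy as np

    def weighted_tls(D, w):
        """Minimise sum_i w_i (d_i^T p)^2 subject to ||p|| = 1 by scaling each
        row of D with sqrt(w_i), i.e. applying W_L = diag(sqrt(w_i))."""
        Dw = D * np.sqrt(w)[:, None]
        return np.linalg.svd(Dw)[2][-1]

    # Arbitrary data vectors and a Gaussian weighting over the neighbourhood,
    # as used in local optical-flow estimation
    rng = np.random.default_rng(2)
    D = rng.standard_normal((100, 3))
    pos = np.linspace(-1.0, 1.0, 100)
    w = np.exp(-0.5 * (pos / 0.5) ** 2)
    print(weighted_tls(D, w))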
A.4 Equilibrated Total Least Squares
In the previous section we have addressed the problem of weighting each data point individually.
Now the question arises: what has to be done if the errors of the individual variables are not to be
treated equally? Consider for example the OLS case: there the data matrix is assumed to be error-free,
so it is not appropriate to apply a TLS solver to such a problem. This is an extreme case, but one can
imagine other situations where, for example, the errors in the observations ("y-axis") are half of
the errors in the data ("x-axis"). [Mühlich and Mester, 1999] have developed a framework to
incorporate this situation, which we summarise briefly here.
The key is to introduce a diagonal right-hand weighting matrix W_R = diag(1/σ_i), which
contains the reciprocal standard deviations of the errors in the variables. Our minimisation problem now reads:
‖(W_L D W_R) p‖² → min.
Applying diagonal equilibration matrices can be interpreted as row and column scaling of the
data matrix D. This procedure is well known and is often used to obtain numerically stable
results [Golub and van Loan, 1996].
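In code, the equilibration amounts to a column scaling of D followed by an ordinary TLS solve and a back-transformation of the homogeneous parameter vector. A minimal sketch (my own illustration; the row weighting W_L is omitted, i.e. uniform weights are assumed):

    import numpy as np

    def equilibrated_tls(D, sigma):
        """TLS fit of D p = 0 with one noise level sigma_i per column (variable).
        The columns are scaled with W_R = diag(1/sigma_i); afterwards the solution
        is transformed back to the original, unscaled variables."""
        W_R = np.diag(1.0 / np.asarray(sigma, dtype=float))
        q = np.linalg.svd(D @ W_R)[2][-1]   # smallest right singular vector of D W_R
        p = W_R @ q                         # back to the original variables
        return p / np.linalg.norm(p)

For the straight-line example of the following section, sigma = (σ_x, 1, σ_y) would be used, the middle entry corresponding to the noise-free constant column.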
A.5 Example: Fitting a Straight Line
In this section we summarise the preceding material by applying OLS, TLS and
equilibrated TLS (E-TLS) to the simple problem of fitting a straight line to data (x_i, y_i). The
line can be described by the equation
y = mx + b,
where m is the slope and b is the intercept.
Reformulating the problem in OLS notation yields
Am = b,
with A = (x_i, 1), m = (m, b)ᵀ, b = (y_i).
We obtain the optimal parameters of the line in the OLS sense by applying the Moore-Penrose inverse
(AᵀA)⁻¹Aᵀ to b. OLS is optimal if we have Gaussian noise in the y-coordinates only.
We are better off using a TLS solver if we have the same degree of noise in both the x- and
y-coordinates. In TLS we use a slightly different notation:
Dp = 0,
with D = (Aᵀ, −b)ᵀ = (x_i, 1, −y_i), p = (m, b, 1)ᵀ.
This homogeneous system can be solved via its normal equations by means of a Lagrange multiplier, as explained
in the next section.
Finally, we take advantage of the benefits of equilibration. Assume, for instance, that the noise
in the x-coordinate is σ_x = 1 and in the y-coordinate σ_y = 0.5. From this we can construct a right-hand weighting matrix:
W_R = diag(1/σ_i) = diag(1/σ_x, 1, 1/σ_y) = diag(1, 1, 2).
[Figure A.1: x–y plot of the data points together with the fitted straight lines; legend: data, model, OLS, TLS, E-TLS]
Figure A.1: Numerical example of fitting a straight line. The model is y = 2x + 1; the noise levels are σ_x = 1
and σ_y = 0.5. The results of OLS, TLS and equilibrated TLS are displayed for one selected experiment. It
is evident that E-TLS performs best, while TLS is slightly worse and OLS is the worst.
Our E-TLS problem now becomes:
(D W_R) p = (x_i, 1, −2y_i)(m, b, 1)ᵀ = 0.
It is evident that equilibration is nothing other than a column scaling of the data matrix D. A
numerical example of fitting a straight line is shown in Figure A.1.
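The numerical experiment of Figure A.1 can be reproduced along the following lines. This is a sketch under the stated assumptions (model y = 2x + 1, σ_x = 1, σ_y = 0.5); the random data are of course not identical to those of the figure, and the back-transformation of the equilibrated solution is one common way to return to the original variables:

    import numpy as np

    rng = np.random.default_rng(3)
    sigma_x, sigma_y = 1.0, 0.5
    x0 = np.linspace(-2.0, 2.0, 100)                              # true abscissae
    x = x0 + sigma_x * rng.standard_normal(x0.size)               # noisy x
    y = 2.0 * x0 + 1.0 + sigma_y * rng.standard_normal(x0.size)   # noisy y, model y = 2x + 1

    A = np.column_stack([x, np.ones_like(x)])
    D = np.column_stack([x, np.ones_like(x), -y])   # rows (x_i, 1, -y_i)

    # OLS: noise assumed in the y-values only
    m_ols = np.linalg.pinv(A) @ y

    # TLS: smallest right singular vector of D, then dehomogenise
    p = np.linalg.svd(D)[2][-1]
    m_tls = p[:2] / p[2]

    # E-TLS: column scaling with W_R = diag(1/sigma_x, 1, 1/sigma_y), TLS on the
    # scaled matrix, back-transformation to the original variables
    W_R = np.diag([1.0 / sigma_x, 1.0, 1.0 / sigma_y])
    q = np.linalg.svd(D @ W_R)[2][-1]
    p_eq = W_R @ q
    m_etls = p_eq[:2] / p_eq[2]

    print("OLS  :", m_ols)
    print("TLS  :", m_tls)
    print("E-TLS:", m_etls)

With σ_x larger than σ_y, the OLS slope suffers from the well-known attenuation bias, whereas TLS and in particular E-TLS come closer to the true parameters, which is consistent with the behaviour described in the caption of Figure A.1.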
A.6 TLS Estimates from Normal Equations
In this section we derive the structure tensor starting from the considerations made in the
previous sections. Our starting point is Eq. (A.2): the weighted total least squares problem
together with the constraint ‖p‖ = 1, applied to spatio-temporal data d(x, t) in a
weighted spatio-temporal neighbourhood. We have to solve a constrained minimisation problem, and it is
convenient to incorporate the constraint into the functional to be minimised. This can be
done by introducing a Lagrange multiplier:
E = ∫_{−∞}^{∞} w(x − x′, t − t′) [ (dᵀ(x′, t′) p)² + λ (1 − Σ_{i=1}^{n} p_i²) ] dx′ dt′ → min.
The energy functional E reaches a minimum when the derivatives with respect to all parameters p_i
vanish. These derivatives are:
∂E/∂p_i = 2 ∫_{−∞}^{∞} w(x − x′, t − t′) [ d_i (dᵀ(x′, t′) p) − λ p_i ] dx′ dt′   for   i = 1 . . . n.
If we require a constant parameter field p in our local neighbourhood U, then the p_i can be taken
out of the integral:
p_1 ∫_U d_i d_1 dx′ dt′ + . . . + p_n ∫_U d_i d_n dx′ dt′ = λ p_i   for   i = 1 . . . n,
where we assumed the weight function to be normalised for the right-hand side: ∫ w dx′ dt′ = 1.
These n equations can be written in vector-matrix form by introducing the structure tensor J:
J p = λ p   with   J = ∫_{−∞}^{∞} w(x − x′, t − t′) d(x′, t′) d(x′, t′)ᵀ dx′ dt′.
(A.3)
It follows that Eq. (A.2) is minimised by the eigenvector p_n corresponding to the smallest eigenvalue λ_n of
J. We have arrived at an equation which is similar to the eigenvalue equation Eq. (4.19). The
difference is that now the parameter vector p represents the eigenvectors, so Eq. (A.3) is a
generalisation of the former eigenvalue problems. It can be transformed to Eq. (4.19) if we
consider a spatio-temporal problem where n = 3:
p = (p_x, p_y, p_t)ᵀ ≡ r_3 = (r_{3x}, r_{3y}, r_{3t}).
By deriving this TLS solution in a mathematically rigorous way instead of introducing it intuitively, we
have laid the foundations for solving more complicated TLS problems.
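The claim from the beginning of this appendix, that diagonalising J = DᵀD is equivalent to a TLS fit to D, can be checked numerically. A minimal sketch (made-up data, uniform weights):

    import numpy as np

    rng = np.random.default_rng(4)
    D = rng.standard_normal((500, 3))       # rows d_i, e.g. spatio-temporal gradients

    # Structure tensor J = D^T D (uniform weights) and its eigenvector belonging
    # to the smallest eigenvalue (np.linalg.eigh sorts eigenvalues in ascending order)
    J = D.T @ D
    p_eig = np.linalg.eigh(J)[1][:, 0]

    # TLS fit: right singular vector of D belonging to the smallest singular value
    p_svd = np.linalg.svd(D)[2][-1]

    # The two vectors agree up to an overall sign
    print(np.allclose(np.abs(p_eig), np.abs(p_svd)))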
B The Lie Group of Continuous Transformations
B.1 Generalization of the Affine Subgroup in 2D
The affine group is a subgroup of the Lie group of continuous transformations. An excellent
overview of the mathematical prerequisites is given in [Olver, 1986]. [Bluman and Kumei,
1989] present this topic from the viewpoint of applied mathematicians. In [Kanatani, 1990] some
applications of the Lie group of continuous transformations to computer vision are presented.
In section 4.3.1 we have taken a look at affine transformations of the kind f = t + Ax + at.
Now we replace the flow vector f by a generalised transformation r = S(r′, a), where a
is a parameter vector (r, r′ ∈ R^N, a ∈ R^P). S forms a Lie group of continuous
transformations if the following properties hold (amongst others):
1. There exists an identity element of the group: r = S(r, 0).
2. There exists an inverse transformation: r′ = S⁻¹(r, a).
3. a is a continuous parameter in a given interval in R^P.
4. r = S(r′, a) is infinitely differentiable in r′ and analytic in a. A real function is analytic
if it can be expressed locally by a power series.
From the last property it follows that we can expand the vector r in a Taylor series about a = 0:
r_j = S_j(r′_j, a) ≈ S_j(r′_j, 0) + Σ_{i=1}^{P} a_i ∂S_j(r′_j, a)/∂a_i = r′_j + Σ_{i=1}^{P} a_i ∂r_j/∂a_i   for   j = 0 . . . N.
(B.1)
Now we expand the brightness function g(r) about r′ with respect to the parameters a_i:
g(r) ≈ g(r′) + Σ_{i=1}^{P} a_i ∂g(r)/∂a_i,
(B.2)
using the following expression for the brightness function's dependence on the transformation parameters (together with Eq. (B.1)):
∂g(r)/∂a_i = Σ_{j=0}^{N} (∂g/∂r_j)(∂r_j/∂a_i) = Σ_{j=0}^{N} (∂g/∂r_j)(∂S_j(r_j, a)/∂a_i) ≡ L_i g(r),
where L_i, i ∈ {1, . . . , P}, is the infinitesimal generator of the Lie group. With the assumption of
brightness conservation g(r) = g(r′) along the trajectory, Eq. (B.2) becomes
Σ_{i=1}^{P} a_i ∂g(r′)/∂a_i = Σ_{i=1}^{P} a_i L_i g(r′) ≡ (Lg)ᵀ a = 0,
(B.3)
which is a more general version of the EBCCE (see Eq. (4.20)).
Example: Translation. The corresponding coordinate transformation reads
r′ = S(r, a = (t, 1)ᵀ) = (x, y)ᵀ + t,
where t = (t_1, t_2)ᵀ denotes the translation vector, and r = (x, y, t) denotes a point in the
2+1-dimensional space. The infinitesimal generators are calculated as
L_1 = (∂S_1/∂a_1)(∂/∂r_1) + (∂S_2/∂a_1)(∂/∂r_2) + (∂S_3/∂a_1)(∂/∂r_3) = 1·∂/∂x + 0 + 0,   L_2 = ∂/∂y,   and   L_3 = ∂/∂t,
so that the EBCCE Eq. (B.3) becomes
Σ_{i=1}^{3} a_i L_i g(r′) = t_1 ∂g/∂x + t_2 ∂g/∂y + 1·∂g/∂t = 0,
which can be identified as the BCCE.
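The generators of the translation example can also be obtained mechanically by differentiating S with respect to the group parameters, as the following SymPy sketch illustrates (purely illustrative and not from the original text; the explicit third parameter tau, set to 1 at the end, stands for the unit time step):

    import sympy as sp

    # Coordinates r = (x, y, t) and group parameters a = (t1, t2, tau)
    x, y, t = sp.symbols('x y t')
    t1, t2, tau = sp.symbols('t1 t2 tau')
    r, a = [x, y, t], [t1, t2, tau]

    # Translation group: S(r, a) = (x + t1, y + t2, t + tau); a = 0 is the identity
    S = [x + t1, y + t2, t + tau]

    # Infinitesimal generators applied to the brightness function g(x, y, t):
    # L_i g = sum_j (dS_j/da_i) (dg/dr_j); for translations these coefficients are constant
    g = sp.Function('g')(x, y, t)
    Lg = [sum(sp.diff(S[j], a[i]) * sp.diff(g, r[j]) for j in range(3)) for i in range(3)]
    print(Lg)   # yields the derivatives of g with respect to x, y and t

    # EBCCE: sum_i a_i L_i g = t1*g_x + t2*g_y + tau*g_t; with tau = 1 this is the BCCE
    ebcce = sum(a[i] * Lg[i] for i in range(3))
    print(ebcce.subs(tau, 1))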
Example: Affine Transformation. The corresponding coordinate transformation reads
r′ = S(r, a = (t, A, 1)ᵀ) = A(x, y)ᵀ + t,
where A = (a_ij) is a 2 × 2 matrix, and t = (t_1, t_2)ᵀ denotes the translation vector. Again
r = (x, y, t) is a point in the 2+1-dimensional space. The infinitesimal generators are in this
case
L_1 = (∂S_1/∂a_1)(∂/∂r_1) + (∂S_2/∂a_1)(∂/∂r_2) + (∂S_3/∂a_1)(∂/∂r_3) = x ∂/∂x + 0 + 0,   L_2 = y ∂/∂y,   L_3 = x ∂/∂y,
L_4 = y ∂/∂x,   L_5 = ∂/∂x,   L_6 = ∂/∂y,   and   L_7 = ∂/∂t,
so that the EBCCE reads in this case
Σ_{i=1}^{7} a_i L_i g(r′) = a_11 x ∂g/∂x + a_22 y ∂g/∂y + a_12 x ∂g/∂y + a_21 y ∂g/∂x + t_1 ∂g/∂x + t_2 ∂g/∂y + ∂g/∂t.
B.2 Generalization of the Affine Subgroup in 3D
Again, we drop the assumption of an affine transformation and replace the vector u by
a generalised transformation r = S(r′, a), as we did in the previous section. Amongst others,
Eq. (B.3) holds for the 3D case, too.
Example: Rotation About the x-Axis. The rotation about the x-axis, which is essentially
the first component of the vorticity vector, is given by
ω_1 = ε_{ij1} γ_{ij} = γ_23 − γ_32 = ∂u_2/∂x_3 − ∂u_3/∂x_2.
The corresponding coordinate transformation reads
r′ = S(r, a = (γ_23, γ_32, 1)ᵀ) = (0, γ_23 z, −γ_32 y, 0)ᵀ,
where r = (x, y, z, t)ᵀ denotes a point in the 3+1-dimensional space. The infinitesimal
generators are calculated as
L_1 = (∂S_1/∂a_1)(∂/∂r_1) + (∂S_2/∂a_1)(∂/∂r_2) + (∂S_3/∂a_1)(∂/∂r_3) + (∂S_4/∂a_1)(∂/∂r_4) = 0 + z ∂/∂y + 0 + 0,   L_2 = −y/z*,   L_3 = ∂/∂t,
so that the three-dimensional extension of the EBCCE Eq. (B.3) reads in this case
Σ_{i=1}^{3} a_i L_i g(r′) = γ_23 z ∂g/∂y − γ_32 y g/z* + ∂g/∂t.
Note that for computing the derivatives we used an extension of the augmented gradient:
∂/∂r_i = (∇̃ᵀ, ∂/∂t) = (∂/∂x, ∂/∂y, 1/z*, ∂/∂t).
Bibliography
R. J. Adrian. Particle-imaging techniques for experimental fluid mechanics. Annu. Rev. Fluid
Mech., 23:261–304, 1991. 28
J. C. Agüi and J. Jimenez. On the performance of particle tracking. Journal of Fluid Mechanics,
185:261–304, 1987. 93, 94
H.-E. Albrecht, M. Borys, N. Damaschke, and C. Tropea. Laser Doppler and phase Doppler
measurement techniques. Springer Verlag, Berlin, Heidelberg, 2003. 27
G. Balschbach, J. Klinke, and B. Jähne. Multichannel shape from shading techniques for moving
specular surfaces. In Computer Vision - ECCV’98. Springer, Berlin, 2001. 17
M. L. Banner. The influence of wave breaking on the surface pressure distribution in wind-wave
interactions. J. Fluid Mech., 211:463–495, 1990. 21
M. L. Banner and W. L. Peirson. Tangential stress beneath wind-driven air-water interfaces.
Journal of Fluid Mechanics, 364:115–145, 1998. 21, 22
M. L. Banner and O. M. Phillips. On the incipient breaking of small scale waves. J. Fluid Mech.,
65:647–656, 1974. 18
J. L. Barron, D. J. Fleet, and S. S. Beauchemin. Performance of optical flow techniques. Intern.
J. Comput. Vis., 12(1):43–77, 1994. 35, 36
R. J. M. Bastiaans, G. A. J. van der Plas, and R. N. Kieft. The performance of a new PTV
algorithm applied in super-resolution PIV. Experiments in Fluids, 32:346–356, 2002. 33
S. Beucher. The watershed transformation applied to image segmentation. In Conference on
Signal and Image Processing in Microscopy and Microanalysis, pp. 299–314, 1991. 82
M. J. Black and P. Anandan. The robust estimation of multiple motions: parametric and piecewise-smooth flow fields. Computer Vision and Image Understanding, 63:75–104, 1996. 46
G. W. Bluman and S. Kumei. Symmetries and differential equations. Springer Verlag, Berlin,
Heidelberg, New York, 1989. 137
D. Braun. Die effektive Diffusion im welligen Rieselfilm. PhD thesis, Rheinisch-Westfälische
Technische Hochschule Aachen, 1969. 104
C. Brücker. 3-D PIV via spatial correlation in a color-coded light-sheet. Experiments in Fluids,
21:312–314, 1996. 31
S. J. K. Bukhari and M. H. K. Siddiqui. Turbulent structure beneath air-water interfaces during
natural convection. Physics of Fluids, 18(035106), 2006. 117, 121
H.-C. Chang and E. A. Demekhin. Complex wave dynamics on thin films. Elsevier, 2002. 104
M. Coantic. A model of gas transfer across air-water interfaces with capillary waves. Journal of
Geophysical Research, 91:3925–3943, 1986. 14
I. Cohen and I. Herlin. Non uniform multiresolution method for optical flow and phase portrait
models: environmental applications. International Journal of Computer Vision, 33(1):24–49,
1999. 45
E. D. Cokelet. Breaking waves. Nature, 267:769–774, 1977. 18
T. Corpetti, E. Memin, and P. Perez. Dense estimation of fluid flows. IEEE Trans. on Pattern
Analysis and Machine Intelligence, 24(3):365–380, 2002. 38, 48
T. Corpetti, D. Heitz, G. Arroyo, E. Memin, and A. Santa-Cruz. Fluid experimental flow estimation based on an optical-flow scheme. Exp. Fluids, 40(1):80–97, 2005. 48
E. Cowen and S. Monismith. A hybrid digital particle tracking velocimetry technique. Experiments in Fluids, 22:199–211, 1997. 33, 90
G. T. Csanady. The role of breaking wavelets in air-sea gas transfer. J. Geophs. Res., 95:749–759,
1990. 17, 20
P. V. Danckwerts. Significance of liquid-film coefficients in gas absorption. Industrial and Engineering Chemistry, 43:1460–1467, 1951. 13
E. L. Deacon. Gas transfer to and across an air-water interface. Tellus, 29:363–374, 1977. 16
P. Debaene. Neuartige Messmethode zur zeitlichen und örtlichen Erfassung der wandnahen Strömung in der Biofluidmechanik. PhD thesis, TU Berlin, 2005. 3, 49, 99, 121, 122, 123, 124,
125, 126
K. Degreif. Untersuchungen zum Gasaustausch: Entwicklung und Applikation eines zeitlich
aufgelösten Massenbilanzverfahrens. PhD thesis, Universität Heidelberg, 2006. 14
M. Detert, G. H. Jirka, M. Jehle, M. Klar, B. Jähne, H.-J. Köhler, and T. Wenka. Pressure
fluctuations within subsurface gravel bed caused by turbulent open-channel flow. In Proc. of
River Flow 2004, pp. 695–701. A. A. Balkema Publishers, 2004. 24
J. Dieter and B. Jähne. Flow Measurements close to the free air/sea interface. In Dieter, J.
and Jähne, B., eds., Proceedings of the 7th International Symposium on Applications of Laser
Techniques to Fluid Mechanics. Lisbon, July 1994. 21
F. Durst, A. Melling, and J. H. Whitelaw. Principles and practise of laser-Doppler anemometry.
NASA STI/Recon Technical Report A, 76:47019–+, 1976. 27
H. Eckelmann. Einführung in die Strömungsmesstechnik. Teubner, Stuttgart, 1997. 24
D. Engelmann. 3D-flow measurement by stereo imaging. PhD thesis, University of Heidelberg,
2000. 32
G. Farnebäck. Fast and accurate motion estimation using orientation tensors and parametric
motion models. In Proc. ICPR, vol. 1, pp. 135–139. Barcelona, Spain, 2000. 43
H. H. Fernholz, G. Janke, M. Schober, P. M. Wagner, and D. Warnack. New developments and
applications for skin-friction measuring techniques. Meas. Sci. Technol., 7:1396–1409, 1996.
24
K. A. Flack, J. R. Saylor, and G. B. Smith. Near-surface turbulence for evaporative convection at
an air/water interface. Physics of Fluids, 13(11):529, 2001. 117
C. Garbe, H. Spies, and B. Jähne. Estimation of surface flow and net heat flux from infrared
image sequences. Journal of Mathematical Imaging and Vision, 19:159–174, 2003. 24, 48
C. F. Gauss. Theoria combinationis observationum erroribus minimis obnoxiae. Comment. Soc.
Reg. Sci. Gotten. Recent., 5:33–90, 1823. 131
S. Geman and D. E. McClure. Bayesian image analysis: An application to single photon emission
tomography. In Proc. Amer. Statist. Assoc. Statistical Computing Section, pp. 12–18, 1984. 45
G. H. Golub and C. F. van Loan, eds. Matrix computations. The John Hopkins University Press,
Baltimore and London, 1996. 133
G. H. Golub and C. F. van Loan. An analysis of the total least squares problem. SIAM Journal on
Numerical Analysis, 17(6):883–893, 1980. 131
G. H. Granlund and H. Knutsson. Signal processing for computer vision. Kluwer, 1995. 36
F. J. Green. The Sigma-Aldrich handbook of stains, dyes and indicators. Aldrich Chemical
Company Inc., Milwaukee, Wisconsin, 1990. 62
A. W. Gruen. Adaptive least squares correlation: a powerful image matching technique. S Afr J
of Photogrammetry, Remote Sensing and Cartography, 14(3):175–187, 1985. 28
L. Gui and S. T. Wereley. A correlation based continuous window-shift technique to reduce the
peak-locking effect in digital PIV evaluation. Experiments in Fluids, 32:506–517, 2002. 30
R. Hain and C. J. Kähler. Dynamisches Auswerteverfahren für zeitaufgelöste PIV-Bildsequenzen.
In 12. Fachtagung “Lasermethoden in der Strömungsmesstechnik”. Universität Karlsruhe,
2004. 30
H. Haussecker, S. Reinelt, and B. Jähne. Heat as a proxy tracer for gas exchange measurements
in the field: principles and technical realization. In Selected Papers from the 3rd International
Symposium on Air-Water Gas Transfer, pp. 405–413, 1995. 21
H. W. Haussecker and D. J. Fleet. Computing optical flow with physical models of brightness
variation. PAMI, 23(6):661–673, 2001. 46
H. Helmholtz. Über die Integrale der hydrodynamischen Gleichungen, welche den Wirbelbewegungen entsprechen. Crelles J., 55:25, 1858. 6
F. Hering. Lagrangesche Untersuchungen des Strömungsfeldes unterhalb der wellenbewegten
Wasseroberfläche mittels Bildfolgenanalyse. PhD thesis, University of Heidelberg, 1996. 31,
94
F. Hering, M. Merle, D. Wierzimok, and B. Jähne. A robust technique for tracking particles
over long image sequences. In Proc. of ISPRS Intercommission Workshop ’From Pixels to
Sequences’, Int’l Arch. of Photog. and Rem. Sens., 1995a. 82
F. Hering, D. Wierzimok, and B. Jähne. Measurements of enhanced turbulence in short wind-induced water waves. In Selected Papers from the 3rd International Symposium on Air-Water
Gas Transfer, pp. 125–134, 1995b. 21
R. Higbie. The rate of absorption of a pure gas into a still liquid during short periods of exposure.
Trans. Am. Inst. Chem. Eng., 31:365–389, 1935. 13
W. J. Hiller, S. Koch, T. A. Kowalewski, and F. Stella. Onset of natural convection in a cube. Int.
J. Heat Mass Transfer, 13:3251–3263, 1993. 24
K. D. Hinsch. Holographic particle image velocimetry. Meas. Sci. Technol., 13:R61–R72, 2002.
30
A. Honkan and Y. Andreopoulos. Vorticity, strain-rate and dissipation characteristics in the near-wall region of turbulent boundary layers. J. Fluid Mech., 350:29–96, 1997. 26
B. K. P. Horn and B. G. Schunk. Determining optical flow. Artificial Intelligence, 17:185–204,
1981. 35, 39, 47
H. T. Huang, H. F. Fielder, and J. J. Wang. Limitation and improvement of PIV, part I. Limitation
of conventional techniques due to deformation of particle image patterns. Experiments in
Fluids, 15:168–174, 1993a. 30
H. T. Huang, H. F. Fielder, and J. J. Wang. Limitation and improvement of PIV, part II. Particle
image distortion, a novel technique. Experiments in Fluids, 15:263–273, 1993b. 30
IPCC. Third assessment report: Climate change. Technical report, International Panel on Climate
Change, 2001. 1
B. Jähne. Digital image processing. Springer-Verlag, Berlin, Heidelberg, 5th edn., 2002. 36, 41,
80, 87
B. Jähne, H. Haussecker, and P. Geissler, eds. Handbook of computer vision and applications.
Academic Press, San Diego, CA, USA, 1999. 36, 37, 38, 42, 44, 45, 46
M. Jehle and B. Jähne. Direct estimation of the wall shear rate using parametric motion models
in 3D. In Pattern Recognition, 28th DAGM, 2006a. 3
M. Jehle and B. Jähne. Eine neuartige Methode zur raumzeitlichen Analyse von Strömungen in
Grenzschichten. In Frühjahrstagung 2006 der Deutschen Physikalischen Gesellschaft. Heidelberg, Germany, 2006b. 3
M. Jehle and B. Jähne. A novel method for spatiotemporal analysis of flows within the water-side
viscous boundary layer. In 12th International Symposium of Flow Visualisation. Göttingen,
Germany, 2006c. 3
M. Jehle, M. Klar, and B. Jähne. Optical-Flow based velocity analysis. In C. Tropea, J. Foss,
and A. Yarin, eds., Springer Handbook of Experimental Fluid Dynamics. Springer, Berlin,
Heidelberg, in preparation. 4
M. Jehle, M. Klar, H.-J. Köhler, and M. Heibaum. Bewegungsdetektion und Geschwindigkeitsanalyse in Bildfolgen zur Untersuchung von Sedimentverlagerungen. In Mitteilungen des
Instituts für Grundbau und Bodenmechanik, vol. 77, pp. 371–392. Braunschweig, Germany,
2004. 48
B. Jähne. Transfer processes across the free water surface. Professorial dissertation, University
of Heidelberg, 1985. 2
B. Jähne, P. Libner, R. Fischer, T. Billen, and E. Plate. Investigating the transfer processes across
the free aqueous viscous boundary layer by the controlled flux method. Tellus, 41B:177–195,
1989. 21
B. Jähne, K. O. Münnich, R. Bösinger, A. Dutzi, W. Huber, and P. Libner. On the parameters
influencing air-water gas exchange. J. Geophys. Res., 92:1937–1949, 1987. 19, 21
B. Jähne, M. Schmidt, and R. Rocholz. Combined optical slope/height measurements of short
wind waves: principles and calibration. Meas. Sci. Technol., 16:1937–1944, 2005. 17
K. Kanatani. Group-theoretical methods in image understanding. Springer Verlag, Berlin, Heidelberg, New York, 1990. 137
R. Keane, R. J. Adrian, and Y. Zhang. Super-resolution particle imaging velocimetry. Meas. Sci.
Technol., 6:754–768, 1995. 33
C. Kähler and J. Kompenhans. Fundamentals of multiple plane stereo particle image velocimetry.
Experiments in Fluids, 29:70–77, 2000. 30
M. Klar. Design of an endoscopic 3-D Particle-Tracking Velocimetry system and its application
in flow measurements within a gravel layer. Diss., Univ. Heidelberg, 2005. 31, 32
H. Knutsson and C.-F. Westin. Normalized and differential convolution. In IEEE Conf. Computer
Vision & Pattern Recognition, pp. 515–523, 1993. 93
E. L. Koschmieder. Bénard Cells and Taylor Vortices. Cambridge University Press, 1993. 111
E. B. Kraus and J. A. Businger. Atmosphere-ocean interaction. Oxford University Press, 1994.
17
P. K. Kundu. Fluid mechanics. Academic Press, San Diego, USA, 1990. 8, 13, 111, 113
P. Lancaster and K. Salkauskas. Curve and surface fitting. An introduction. Acadamic Press,
1986. 93
R. Larsen. Estimation of dense image flow fields in fluids. IEEE Transactions on Geoscience
and Remote Sensing, 36(1):256–264, January 1998. 41, 48
C. Le Quéré, O. Aumont, L. Bopp, P. Pousquet, P. Ciais, R. Francey, M. Heimann, C. D. Keeling,
R. F. Keeling, H. Kheshgi, P. Peylin, S. C. Piper, I. C. Prentice, and P. J. Rayner. Two decades
of ocean CO2 sink and variability. Tellus, 55B:649–656, 2003. 1
C. Leue, P. Geissler, F. Hering, and B. Jähne. Segmentierung von Partikelbildern in der Strömungsvisualisierung. In Proceedings of 18th DAGM-Symposium Mustererkennung, Informatik
Aktuell, pp. 118–129, 1996. 83, 84
P. S. Liss and P. G. Slater. Flux of gases across the air-sea interface. Nature, 247:181–184, 1974.
12
C. G. Lomas. Fundamentals of hot wire anemometry. Cambridge University Press, Cambridge,
London, New York, Melbourne, 1986. 26
M. S. Longuet-Higgins. Capillary rollers and bores. J. Fluid Mech., 240:659–679, 1992. 19
B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo
vision. In DARPA Image Understanding Workshop, pp. 121–130, 1981. 38, 41, 131
H. G. Maas, A. Gruen, and D. Papantoniou. Particle tracking velocimetry in three-dimensional
flows, Part I: Photogrammetric determination of particle coordinates. Experiments in Fluids,
15:133–146, 1993. 32
N. A. Malik, T. Dracos, and D. Papantoniou. Particle tracking velocimetry in three-dimensional
flows, Part II: Particle tracking. Experiments in Fluids, 15:279–294, 1993. 32
M. Marxen, P. E. Sullivan, M. R. Loewen, and B. Jähne. Comparison of Gaussian particle center
estimators and the achievable measurement density for particle tracking velocimetry. Experiments in Fluids, 29:145–153, 2000. 85
R. Mayer, R. A. W. M. Henkes, and J. L. van Ingen. Quantitative infrared-thermography for
wall-shear stress measurements in laminar flow. Int. Journal of Heat and Mass Transfer, 41
(15):2347–2356, 1998. 25
S. P. McKenna and W. R. McGillis. Performance of digital image velocimetry processing techniques. Experiments in Fluids, 32:2, 2002. 32
B. G. McLachlan and J. H. Bell. Pressure-sensitive paint in aerodynamic testing. Experimental
Thermal and Fluid Science, 10:470–485, 1995. 24
W. K. Melville, R. Shear, and F. Veron. Laboratory measurements of the generation and evolution
of Langmuir circulations. Journal of Fluid Mechanics, 364:31–58, 1998. 19, 20
M. Mühlich and R. Mester. Subspace methods and equilibration in computer vision. Technical
report, Institute for Applied Physics, Goethe-Universität, Frankfurt, Germany, 1999. 133
G. Mie. Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen. Ann. Physik, 25:
377–445, 1908. 59
D. Monson, G. Mateer, and F. Menter. Boundary-layer transition and global skin friction measurement with an oil-fringe imaging technique. SAE Tech Paper Series, 932550, 1993. 25
S. Nakamura, T. Mukai, and M. Senoh. Candela-class high-brightness InGaN/AlGaN double-heterostructure blue-light-emitting diodes. Appl. Phys. Lett., 64:1687–1689, 1994. 63
A. Naya, E. Trucco, and N. E. Thacker. When are simple LS estimators enough? An empirical
study of LS, TLS, and GTLS. Intern. J. Comput. Vis., 68(2):203–216, 2006. 131
W. Nitsche. Strömungsmesstechnik. Springer, Berlin, Heidelberg, 1994. 24
K. Okuda. Internal flow structure of short wind waves, part I-IV. Journal of the Oceanographical
Society of Japan, 38, 40, 1982. 19, 20, 21, 22
P. J. Olver. Applications of Lie groups to differential equations. Springer Verlag, Berlin, Heidelberg, New York, 1986. 137
W. L. Peirson and M. L. Banner. Aqueous surface layer flows induced by microscale breaking
wind waves. Journal of Fluid Mechanics, 479:1–38, 2003. 21, 22
R. J. Perkins and J. C. R. Hunt. Particle tracking in turbulent flows. In Advances in Turbulence
2, pp. 286–291. Springer, 1989. 31
A. E. Perry. Hot-wire anemometry. Oxford University Press, Oxford, New York, 1982. 26
O. M. Phillips. The dynamics of the upper ocean. Cambridge University Press, 1969. 17
W. H. Press, S. A. Teukolsky, W. Vetterling, and B. Flannery. Numerical recipes in C: The art of
scientific computing. Cambridge University Press, New York, 1992. 43, 85
M. Raffel, C. Willert, and J. Kompenhans. Particle Image Velocimetry. A practical guide.
Springer, Berlin, Heidelberg, New York, 1998. 28, 29
L. Rayleigh. On convection currents in a horizontal layer of fluid, when the higher temperature
is on the under side. Phil. Mag, 32:529, 1916. 113
H. Reichardt. Vollständige Darstellung der turbulenten Geschwindigkeitsverteilung in glatten
Leitungen. Z. angewandt. Math. und Mech., 31:208–219, 1951. 16
B. Ristic, S. Arulampalam, and N. Gordon. Particle filters for tracking applications. Artech
House, 2004. 31
R. Rocholz. Bildgebendes System zur simultanen Neigungs- und Höhenmessung an kleinskaligen Wind-Wasserwellen. Diploma thesis, University of Heidelberg, 2005. 70
B. Ruck. Color-coded tomography. In Proc. 7th Int. Symp. on Fluid Control, Measurement and
Visualization, pp. 25–28, 2003. 31
P. Ruhnau, C. Gütter, and C. Schnörr. A variational approach for particle tracking velocimetry.
Meas. Sci. Technol., 16(7):1449–1458, 2005a. 31
P. Ruhnau, T. Kohlberger, C. Schnörr, and H. Nobach. Variational optical flow estimation for
particle image velocimetry. Experiments in Fluids, 38:21–32, 2005b. 45, 47
P. Ruhnau and C. Schnörr. Optical Stokes flow. In 13th International Symposium. Applications
of Laser Techniques to Fluid Mechanics, Lisbon, Portugal, 2006. 48
P. Ruhnau, A. Stahl, and C. Schnörr. On-line variational estimation of dynamical fluid flows
with physics-based spatio-temporal regularization. In Pattern Recognition, Proc. 26th DAGM
Symposium, 2006. 48
C. L. Sabine, R. A. Feely, N. Gruber, R. M. Key, K. Lee, J. L. Bullister, R. Wanninkhof, C. S.
Wong, D. W. R. Wallace, B. Tilbrook, F. J. Millero, T.-H. Peng, A. Kozyr, T. Ono, and A. F.
Rios. The ocean sink for anthropogenic CO2 . Science, 305:367–372, 2004. 1
H. Scharr. Optimal operators in digital image processing. PhD thesis, University of Heidelberg,
2000. 42
A. Schimpf, S. Kallweit, and J. B. Richon. Photogrammetric Particle Image Velocimetry. In
Proc. 5th Int. Symp. on Particle Image Velocimetry, 2003. 30
H. Schlichting. Boundary layer theory. McGraw Hill, New York, 1960. 17
C. Schlosser. Messung von Diffusionskonstanten. Diploma thesis, University of Heidelberg,
2004. 99
M. Siddiqui, M. R. Loewen, W. E. Asher, and A. T. Jessup. Coherent structures beneath wind
waves and their influence on air-water gas transfer. J. Geophys. Res., 109:C03024, 2004. 22
M. Siddiqui, M. R. Loewen, C. Richardson, W. E. Asher, and A. T. Jessup. Simultaneous particle
image velocimetry and infrared imagery of microscale breaking waves. Physics of Fluids, 13
(7):1891–1903, 2001. 21
E. P. Simoncelli. Distributed representation and analysis of visual motion. PhD thesis, MIT, 1993.
36, 44
H. Spies. Range flow estimation: The movement of deformable surfaces. PhD thesis, University
of Heidelberg, 2001. 87
M. Stanislas, M. Okamoto, and C. Kähler. Main results of the First International PIV Challenge.
Meas. Sci. Technol., 14:63–89, 2003. 32
M. Stanislas, M. Okamoto, C. Kähler, and J. Westerweel. Main results of the Second International
PIV Challenge. Exp. Fluids., 39(2):170–191, 2005. 32
G. G. Stokes. On the theory of oscillatory waves. Trans. Camb. Philos. Soc., 8:441, 1847. 18
C. Tropea. Laser Doppler anemometry: recent developments and future challenges. Meas. Sci.
Technol., 6:605–619, 1995. 27
C. Tropea, J. Foss, and A. Yarin. Springer Handbook of Experimental Fluid Dynamics. Springer,
Berlin, Heidelberg, in preparation. 24
H. C. van de Hulst, ed. Light scattering by small particles. Dover Publications, New York, 1981.
59
S. van Huffel and J. Vandewalle. The total least squares problem: Computational aspects and
analysis. SIAM (Society for Industrial and Applied Mathematics), Philadelphia, 1991. 131
F. Veron and W. K. Melville. Experiments on the stability and transition of wind-driven water
surfaces. J. Fluid Mech., 446:25–65, 2001. 19
R. J. Volino and G. B. Smith. Use of simultaneous IR temperature measurements and DPIV to
investigate thermal plumes in a thick layer cooled from above. Experiments in Fluids, 27:
70–78, 1999. 117
G. Voulgaris and J. H. Trowbridge. Evaluation of the accoustic velocimeter (ADV) for turbulence
measurements. Journal of Atmospheric and Oceanic Technology, 15(1):272–289, 1998. 28
G. Welch and G. Bishop. An introduction to the Kalman-filter. Technical Report TR 95-041,
University of North Carolina at Chapel Hill, Department of Computer Science, 1995. 31
J. Westerweel. Digital Particle Image Velocimetry - Theory and application. Delft University
Press, Delft, 1993. 28
J. Westerweel, D. Dabiri, and M. Gharib. The effect of a discrete window offset on the accuracy
of cross-correlation analysis of digital PIV recordings. Experiments in Fluids, 23:20–28, 1997.
30
R. Wildes, M. Amabile, A.-M. Lanziletto, and T.-S. Leu. Recovering estimates of fluid flows
from image sequence data. Computer Vision and Image Understanding, 80:246–266, 2000.
38, 48
C. E. Willert and M. Gharib. Three-dimensional particle imaging with a single camera. Experiments in Fluids, 12:353–358, 1992. 32
J. Wolfrum, T. Dreier, V. Ebert, and C. Schulz. Laser-based combustion diagnostics. In R. A.
Meyers, ed., Encyclopedia of Analytical Chemistry, pp. 2118–2148. John Wiley and Sons Ltd.,
Chinchester, 2000. 24
W. Woodward and G. Appell. Current velocity measurements using acoustic Doppler backscatter:
A review. IEEE Journal of Oceanic Engineering, 11(1):3–6, 1986. 28
C. J. Zappa, W. E. Asher, A. T. Jessup, J. Klinke, and S. R. Long. Microbreaking and the
enhancement of air-water transfer velocity. J. Geophys. Res., 109:C08S16, 2004. 21
Acknowledgements
Carrying out this work would not have been possible without the support of numerous people and institutions.
First of all, I thank Prof. Dr. Bernd Jähne for the opportunity to carry out this work in his research group at the Interdisziplinäres Zentrum für Wissenschaftliches Rechnen and at the Institut für Umweltphysik. I am grateful to him for the many discussions and for the numerous impulses that made this work so interesting. My thanks go to Prof. Dr. Christoph Cremer for his interest in my work and for acting as second referee.
I would like to thank the Deutsche Forschungsgemeinschaft for funding this work and for the opportunity to establish many interesting contacts with other research groups within the priority programme 'Bildgebende Messverfahren für die Strömungsanalyse'. In particular, my thanks go to several members of the Labor für Biofluidmechanik of the Charité in Berlin, namely André Berthe, Bastian Blümel, Dr. Perrine Debaene and Dr. Ulrich Kertzscher.
I would like to thank Hans-Jürgen Köhler and Dr. Thomas Wenka for the good cooperation within the BAW project. Special thanks go to Dr. Michael Klar, who introduced me, with advice and hands-on help, to the fields of image processing and flow measurement techniques.
I thank the staff of the IUP workshop and of the central workshop for their active support.
My colleagues in the Digital and Multidimensional Image Processing groups contributed greatly to the success of this work. For proofreading I would like to thank Dr. Günther Balschbach, Dr. Kai Degreif, Pavel Pavlov, Roland Rocholz and Martin Schmidt. I thank Dr. Christoph Garbe for carrying out thermographic measurements at the falling-film facility.
During my time as computer administrator it was of great benefit to be able to draw on the experience of Michael Kelm, Marc Kirchner and Dr. Stefan Kraus, whom I would like to thank warmly.
I owe great thanks to all my friends, in particular Stefanie Hoffmann, for their moral support during this work. Last but not least, I thank my parents for their trust and support during my studies in Karlsruhe and my doctoral studies in Heidelberg.