TECHNICAL REVIEW
BV 0057 – 11
ISSN 0007–2621
Surface Microphone
NAH and Beamforming using the same Array
SONAH
No. 1 – 2005
Previously issued numbers of
Brüel & Kjær Technical Review
1 – 2004 Beamforming
1 – 2002 A New Design Principle for Triaxial Piezoelectric Accelerometers
Use of FE Models in the Optimisation of Accelerometer Designs
System for Measurement of Microphone Distortion and Linearity from
Medium to Very High Levels
1 – 2001 The Influence of Environmental Conditions on the Pressure Sensitivity of
Measurement Microphones
Reduction of Heat Conduction Error in Microphone Pressure Reciprocity
Calibration
Frequency Response for Measurement Microphones – a Question of
Confidence
Measurement of Microphone Random-incidence and Pressure-field
Responses and Determination of their Uncertainties
1 – 2000 Non-stationary STSF
1 – 1999 Characteristics of the Vold-Kalman Order Tracking Filter
1 – 1998 Danish Primary Laboratory of Acoustics (DPLA) as Part of the National
Metrology Organisation
Pressure Reciprocity Calibration – Instrumentation, Results and Uncertainty
MP.EXE, a Calculation Program for Pressure Reciprocity Calibration of
Microphones
1 – 1997 A New Design Principle for Triaxial Piezoelectric Accelerometers
A Simple QC Test for Knock Sensors
Torsional Operational Deflection Shapes (TODS) Measurements
2 – 1996 Non-stationary Signal Analysis using Wavelet Transform, Short-time
Fourier Transform and Wigner-Ville Distribution
1 – 1996 Calibration Uncertainties & Distortion of Microphones.
Wide Band Intensity Probe. Accelerometer Mounted Resonance Test
2 – 1995 Order Tracking Analysis
1 – 1995 Use of Spatial Transformation of Sound Fields (STSF) Techniques in the
Automotive Industry
2 – 1994 The use of Impulse Response Function for Modal Parameter Estimation
Complex Modulus and Damping Measurements using Resonant and Nonresonant Methods (Damping Part II)
1 – 1994 Digital Filter Techniques vs. FFT Techniques for Damping Measurements
(Damping Part I)
2 – 1990 Optical Filters and their Use with the Type 1302 & Type 1306 Photoacoustic
Gas Monitors
1 – 1990 The Brüel & Kjær Photoacoustic Transducer System and its Physical
Properties
2 – 1989 STSF — Practical Instrumentation and Application
Digital Filter Analysis: Real-time and Non Real-time Performance
(Continued on cover page 3)
Technical
Review
No. 1 – 2005
Contents
Acoustical Solutions in the Design of a Measurement Microphone for Surface
Mounting................................................................................................................. 1
Erling Sandermann Olsen
Combined NAH and Beamforming Using the Same Array ............................... 11
J. Hald
Patch Near-field Acoustical Holography Using a New Statistically Optimal
Method ................................................................................................................ 40
J. Hald
TRADEMARKS
Falcon Range is a registered trademark of Brüel & Kjær Sound & Vibration Measurement A/S
PULSE is a trademark of Brüel & Kjær Sound & Vibration Measurement A/S
Copyright © 2005, Brüel & Kjær Sound & Vibration Measurement A/S
All rights reserved. No part of this publication may be reproduced or distributed in any form, or by any
means, without prior written permission of the publishers. For details, contact: Brüel & Kjær
Sound & Vibration Measurement A/S, DK-2850 Nærum, Denmark.
Editor: Harry K. Zaveri
Acoustical Solutions in the Design of a
Measurement Microphone for Surface
Mounting
Erling Sandermann Olsen
Abstract
This article describes the challenges encountered, and the solutions found, in the
design of surface microphones for measurement of sound pressure on the surfaces
of aircraft and cars. Given the microphone’s outer dimensions, the optimum rear
cavity shape should be found, together with the best possible pressure equalization
solution. The microphone’s surface should be smooth so as to avoid wind-generated noise. Since the microphones are intended to be used on the surface of aircraft
and cars, they must work in a well-documented way over a temperature range from
–55°C to +100°C and a static pressure range from one atmosphere down to
one or two tenths of an atmosphere. The static pressure even changes with position
on the surface of aircraft and cars due to the aerodynamically generated pressure.
Introduction
At the end of the year 2000, a large-scale aircraft manufacturer approached
Brüel & Kjær to find out if we could design and produce a measurement microphone capable of being mounted on aircraft surfaces. At that time, the microphone
group in Brüel & Kjær’s R&D department was looking at new ways to produce
measurement condenser microphones. If successful, this new production technique
would allow us to produce the required flat microphone design, and it was therefore
decided to carry on with the development. The microphone was to be a 5 Hz –
20 kHz pressure-field microphone of normal measurement microphone quality, but
no more than 2.5 mm in height. It should not interfere with the airflow over aircraft
wings and it should work under normal conditions for aircraft surfaces, including
large temperature and static pressure variations, de-icing, etc.
Two microphone types have been developed, Brüel & Kjær Surface Microphone
Type 4948 and Type 4949. Both are pressure field measurement microphones with
built-in preamplifier, 20 mm in diameter and 2.5 mm high. This article presents
some of the challenges in the acoustic design of the microphones and the solutions that were found.
Considerations and Calculations
Influence of Changes in Static Pressure
Changes in the static pressure influence the microphone in two ways:
First, since a part of the stiffness in the diaphragm system of the microphone is
due to the mechanical stiffness of the air in the cavity behind the diaphragm, and
since the stiffness of the air is proportional to the static pressure [1], the sensitivity
depends on the static pressure. The static pressure dependency at low frequencies,
expressed in dB/kPa, is given by:
$$
S_{ps,\mathrm{dB}} = \frac{d}{dx}\,20\log\!\left(1 - \frac{c_d \gamma x}{c_d \gamma p_s + V_c}\right)
\approx \frac{d}{dx}\,20\log\!\left(1 - \frac{1}{p_s}\,\frac{V_{eq,d}\,x}{V_{eq,d} + V_c}\right)
\approx -8.686\,\frac{1}{p_s}\,\frac{V_{eq,d}}{V_{eq,d} + V_c}
\tag{1}
$$
where x is the change in pressure, c_d is the mechanical compliance of the diaphragm, γ is the ratio of specific heats of air, p_s is the static pressure, V_c is the volume
of the cavity and V_eq,d is the equivalent volume at 1 atmosphere of the diaphragm
compliance.
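As a quick cross-check of eq. (1), the closed-form low-frequency dependency can be compared with a numerical derivative of the exact logarithmic expression. The following Python sketch is illustrative only: the volumes are the ones quoted later in this article (V_eq,d ≈ 7.5 mm³, V_c ≈ 90 mm³), while γ and p_s are standard values for air, not figures stated here.

```python
import math

GAMMA = 1.402    # ratio of specific heats of air (standard value, assumed)
P_S = 101325.0   # static pressure, Pa (1 atm)

def sensitivity_shift_db(x, v_eq_d, v_c, p_s=P_S):
    """Level change in dB for a pressure change x (Pa), i.e., the 20*log term
    of eq. (1) in the approximation V_eq,d = gamma * p_s * c_d."""
    return 20.0 * math.log10(1.0 - (x / p_s) * v_eq_d / (v_eq_d + v_c))

def static_pressure_dependency(v_eq_d, v_c, p_s=P_S):
    """Closed-form low-frequency dependency of eq. (1), in dB/Pa."""
    return -8.686 * (1.0 / p_s) * v_eq_d / (v_eq_d + v_c)

v_eq_d, v_c = 7.5e-9, 90.0e-9   # m^3, values quoted later in the article
h = 1.0                          # step for the central difference, Pa
numeric = (sensitivity_shift_db(h, v_eq_d, v_c)
           - sensitivity_shift_db(-h, v_eq_d, v_c)) / (2.0 * h)
closed = static_pressure_dependency(v_eq_d, v_c)
```

The numerical derivative of the logarithmic expression at x = 0 and the closed-form approximation agree, as eq. (1) claims.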
Second, if the static pressure is different outside and inside the microphone, a
static force will displace the average position of the diaphragm, and thus change the
response of the microphone. Therefore, the cavity must be vented. The vent will
have a certain cut-off frequency. Below the cut-off frequency, the microphone is
insensitive to pressure variations. Above the cut-off frequency the microphone
works as intended. Assuming that the vent is a narrow tube between the cavity and
the surroundings and ignoring the influence of heat conduction at the boundaries,
the cut-off frequency [1] is given by:
$$
f_{NG} = \frac{1}{2\pi}\,\frac{c_d + c_c}{R_v c_d c_c}
= \frac{1}{2\pi}\,\frac{\gamma p_s c_d + V_c}{R_v c_d V_c}
= \frac{1}{2\pi}\,\frac{\pi a^4}{8 \eta l}\,\frac{\gamma p_s c_d + V_c}{c_d V_c}
\tag{2}
$$
where c_c is the mechanical compliance of the cavity, R_v is the acoustic resistance of
the vent, η is the coefficient of viscosity of air, and a and l are the radius and length of
the vent. The dimensions of the tube have to be small. If a cut-off frequency of
5 Hz is desired, the radius must be around 30 μm for the dimensions of the surface
microphone.
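Eq. (2) can be evaluated numerically to see that a vent radius of roughly 30 μm lands near a 5 Hz cut-off. The sketch below uses the Poiseuille vent resistance R_v = 8ηl/(πa⁴) implied by the last form of eq. (2); the vent length of 10 mm and the cavity volumes are illustrative assumptions, not values stated for the actual microphone.

```python
import math

GAMMA = 1.402    # ratio of specific heats of air
ETA = 1.81e-5    # viscosity of air, Pa*s (assumed standard value)
P_S = 101325.0   # static pressure, Pa

def vent_cutoff_hz(a, l, v_eq_d, v_c, p_s=P_S):
    """Cut-off frequency of eq. (2) for a narrow cylindrical vent of radius a
    and length l (both in metres). Compliances are taken as V/(gamma*p_s)."""
    c_d = v_eq_d / (GAMMA * p_s)          # diaphragm compliance from V_eq,d
    c_c = v_c / (GAMMA * p_s)             # cavity compliance
    r_v = 8.0 * ETA * l / (math.pi * a**4)  # Poiseuille resistance of the vent
    return (1.0 / (2.0 * math.pi)) * (c_d + c_c) / (r_v * c_d * c_c)

# a = 30 um from the text; l = 10 mm and the volumes are assumptions.
f = vent_cutoff_hz(a=30e-6, l=10e-3, v_eq_d=7.5e-9, v_c=90e-9)
```

With these assumed dimensions the cut-off comes out in the single-digit Hz range, consistent with the 5 Hz target; note the strong a⁴ dependence, which is why the radius must be controlled so tightly.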
From these expressions it can be seen that the larger the cavity volume, the smaller the static pressure dependence of the response and the lower the cut-off frequency.
Resonances in the Microphone Cavity
From the previous section, it is clear that the volume of the air cavity must be as
large as possible. On the other hand, due to the speed of sound and because the
acoustic mass of the air is high in narrow slits [1], resonances will be present if
some parts of the volume are too large or too narrow.
In the first prototypes of the microphone all internal volume of the microphone
was included in the microphone cavity so as to maximize the cavity volume. All
openings between the three internal volumes were as large as possible within the
given dimensions. The geometry of the cavity is shown in Fig. 1.
Fig. 1. Sketch of the cross section of the prototype design with maximal internal volume. The
maximum internal diameter is approximately 17.5 mm. (Labels: Diaphragm, Backplate, Cavity,
Inner ring, Inner cavity, Outer ring)
Classical calculations [1] considering the entire cavity and the inner cavity as
being simple cylindrical cavities and considering the perimeter length of the ring
volumes showed that non-axisymmetrical (cross-sectional) modes could be
expected at frequencies down to around 7.5 kHz whereas axisymmetrical modes
could be expected at frequencies not lower than 24 kHz. Considering the relatively
narrow openings between the internal volumes, Helmholtz-like resonances could
be expected. It is difficult to identify exactly what parts of the cavity belong to the
mass, but a resonance could be expected in the range 7 kHz to 15 kHz.
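The classical mode estimates above can be approximated for a simple rigid-walled cylindrical cavity from the first zeros of the Bessel function derivatives. The sketch below is not the author's exact calculation, which also considered the perimeter lengths of the ring volumes; it only shows that the ~24 kHz axisymmetrical figure follows from a plain cylinder of the stated diameter, and that the lowest azimuthal (cross-sectional) mode of such a cylinder falls near 11.5 kHz.

```python
import math

C = 343.0  # speed of sound in air, m/s (assumed)

# First zeros j'_{m,1} of the derivative J'_m of the Bessel function of
# order m; these set the transverse cut-on frequencies of a rigid cylinder.
JPRIME_ZEROS = {0: 3.8317, 1: 1.8412, 2: 3.0542}

def cylinder_mode_hz(m, diameter):
    """Cut-on frequency of the (m, 1) transverse mode: f = c * j'_{m,1} / (pi * d)."""
    return C * JPRIME_ZEROS[m] / (math.pi * diameter)

d = 17.5e-3                         # maximum internal diameter from Fig. 1
f_axisym = cylinder_mode_hz(0, d)   # lowest axisymmetrical (radial) mode
f_cross = cylinder_mode_hz(1, d)    # lowest cross-sectional (azimuthal) mode
```

The radial estimate reproduces the "not lower than 24 kHz" figure; the azimuthal estimate for the full cylinder is higher than the ~7.5 kHz obtained from the perimeter of the ring volumes, illustrating how sensitive such classical estimates are to the chosen idealisation.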
Some experiments with the prototype showed that a marked resonance was
present at a frequency around 17 kHz, see curve a) in Fig. 4. The dip in the frequency response most probably occurs because a resonance with pressure maximum at the rear of the diaphragm is present. No significant influence of resonances
seems to be present at lower frequencies. That is, there was no immediate relation
between the estimated resonance frequencies and what was measured. Furthermore, experience from the development of Brüel & Kjær Sound Intensity Calibrator Type 4297 clearly showed that simple acoustical considerations do not
necessarily lead to the right conclusions on the acoustics in narrow and complicated cavities [2]. Therefore, it was decided to use an axisymmetrical model for
Boundary Element Method (BEM) calculations of the acoustics in the microphone
cavity.
The BEM approach used for the calculations was the direct collocation method in
a formulation for axisymmetric bodies [3] with an improved calculation method for
near-singular integration [4] and using a cosine expansion of the acoustical variables in order to calculate non-axisymmetric sound fields [5]. No losses were taken
into consideration in the calculations. Since the condition number of the coefficient
matrix of the model presents maxima at eigenfrequencies [6], it was calculated and
plotted in order to identify the eigenmodes of the microphone cavity. The method
was the same as used for the calculations for Intensity Calibrator Type 4297.
Plots of the condition number as a function of frequency are shown in Fig. 2 for
the cavity configurations mentioned in this article. Measured responses for similar
configurations are shown in Fig. 4.
Fig. 2. Condition number plots for the first four terms in the cosine expansion of the sound
fields in the three cavity configurations mentioned in the text:
a) all three volumes included; b) blocked between inner volume and ring volumes; c) outer
ring volume blocked
—— : m = 0, axisymmetrical modes
– – – –: m = 1, one nodeline
· · · · : m = 2, two nodelines
– · – · –: m = 3, three nodelines
In the first prototype, a) in Fig. 2, a cross-sectional mode is present at around
11.2 kHz and the lowest axisymmetrical mode is present at around 16 kHz. Comparing with the measured response, a) in Fig. 4, the cross-sectional mode does not
present itself whereas the axisymmetrical mode creates a large dip in the frequency
response. The axisymmetrical mode is clearly the expected resonance between the
mass of the air in the narrow part and the volumes of the cavity. This can be seen in
Fig. 3 where the phase of the sound pressure is shown. The sound pressure in the
outer ring is in counterphase with the rest of the sound field in the cavity.
Fig. 3. Phase at 16 kHz of the calculated sound field in the cavity with all three volumes
included. The frequency is that of the axisymmetrical mode identified in a) of Fig. 2
It may be possible to remove the axisymmetrical mode from the frequency range
of the microphone without reducing the total cavity volume by blocking the narrow
part of the cavity between the inner volume and the ring volumes. The condition
numbers are shown for this situation in b) of Fig. 2. As compared to the first example, the lowest cross-sectional mode has moved down in frequency whereas the
lowest axisymmetrical mode has moved up in frequency, to just above the microphone’s frequency range. This situation is also shown in curve b) in Fig. 4.
Although the axisymmetrical eigenmode and thus the dip in the frequency response
did move up in frequency, the solution was not found to be satisfactory. This was
partly because the frequency of the dip was just outside and not far away from the
microphone’s frequency range, and partly because blocking at that position would
be impractical from a manufacturing point of view.
Reducing the size of the outer ring volume was not an option for practical reasons. Instead, it was decided to block the outer ring volume and to investigate how
large the active volume could be made without having axisymmetrical modes too
close to the microphone's active frequency range. In c) of Fig. 2, condition numbers
are shown for a calculation where the narrow part of the gap is blocked on the outside of the openings to the inner ring volume. That is, in these calculations the volume below the backplate and the inner ring volume are coupled through the narrow
part of the cavity. Now, the lowest axisymmetrical mode is around 25 kHz, reasonably well above the frequency range of interest. The lowest cross-sectional mode is
around 17 kHz. The response of a later prototype with a similar configuration is
shown as c) in Fig. 4.
Fig. 4. Frequency responses of prototypes with the three cavity configurations mentioned in the
text: a) all three volumes included; b) blocked between inner volume and ring volumes;
c) outer ring volume blocked
This calculation showed that although the rear cavity had to be substantially
smaller than the total internal volume of the microphone, the inner ring volume and
some of the narrow sections could still be included in the rear cavity. The resulting
volume of the rear cavity is around 90 mm3. With the equivalent volume of the diaphragm compliance of around 7.5 mm3, the resulting static pressure dependency of
the microphone is around –0.007 dB/kPa. This is within the range normally seen
for normal ½″ measurement microphones [7] and it was found acceptable for the
microphone. However, if versions of the microphone have to be made with diaphragms with higher compliance, the consequences for the static pressure dependency must of course be considered.
Static Pressure Equalization
As previously mentioned, a narrow tube for venting of the microphone’s internal
cavity is necessary for the purpose of static pressure equalization. There are some
important restrictions to the pressure equalization vent of the microphone – these
form the basis of this article. Since the microphone is intended for surface mounting, the surface which will be mounted flush is likely to be the only part of the
microphone housing that is exposed to the same static pressure as the diaphragm.
Furthermore, in the presence of airflow, the static pressure can vary over the surface due to aerodynamic lift forces and turbulence. Therefore, the pressure equalization should represent the average static pressure on the surface of the diaphragm
as closely as possible. An ingenious pressure equalization solution has been found.
A groove is incorporated around the entire diaphragm perimeter and the narrow
equalization tube is connected to the bottom of the groove. The pressure equalization system is illustrated in Fig. 5.
Fig. 5. Sketch showing the principle of the static pressure equalization. (Labels: Groove, Vent)
The groove is wide enough to prevent any significant acoustical resistance, and
yet it is narrow enough to dampen standing waves and not allow water to penetrate
into the groove (under normal circumstances). Of course, in the case of the microphone icing-over, or if the surface of the microphone is temporarily immersed in
water, the pressure equalization system will be inoperative, but as long as the
groove is open, the equalization system has proven to work as intended in realistic
situations as well as in tests.
The pressure equalization solution did present some engineering challenges
related to the temperature range of the microphone. Due to changes in properties of
the materials in the microphone, it is difficult to keep contacting surfaces and
seals perfectly airtight at extremely low temperatures, especially because large
variations of the dimensions cannot be allowed for in the design. The chosen solution
has proven to work in the specified temperature range. At temperatures
below the operating temperature range of the microphone, however, an increase in
the lower limiting frequency of the microphone may occur due to minor leakages in
the microphone. At least for the temperatures the microphone is likely to encounter
below its operating range, the effect is reversible. When the temperature
reverts to within the operating temperature range, the lower limiting frequency
reverts to its initial value.
Influence of Airflow
Obviously, a microphone for sound pressure measurements in the presence of
rapid airflow must not, by itself, produce any noise due to the airflow. The surface
of the microphone must be as flat as possible and have no recesses or obstacles
that can generate noise in the presence of the airflow. The surface of the microphone should also be flush with the surface it is mounted in. For applications on
aircraft surfaces the microphone must be embedded into the surface, whereas for
applications with more moderate wind speeds such as automobile surfaces, the
microphone does not necessarily have to be embedded in the surface in order to
avoid wind generated noise, as long as there are no sharp edges on the mounting.
The flatness of the microphone is achieved by welding the diaphragm directly on
top of its carrying surface. In this way, the diaphragm can be totally flush with the
microphone housing. In order to avoid accidental destruction of the diaphragm,
however, it is recessed a few hundredths of a millimeter relative to the rest of the
surface. The groove for pressure equalization is positioned just outside the welding, so using this method, the microphone is unlikely to have any impact on the airflow.
For applications where the microphone does not have to be embedded in the surface, different mounting flanges have been designed that form a smooth transition
from the surface of the microphone to the surrounding surface. These flanges are
made slightly flexible so that they can be mounted easily on moderately curved surfaces.
Conclusions
In this article the solutions to some acoustical challenges in the practical design of
a special microphone have been presented with special emphasis on the influence
of static pressure variations.
Axisymmetrical modes present in the rear cavity of a microphone have a significant impact on its response, whereas non-axisymmetrical modes seem to be less
significant. The cavity must be designed so that no axisymmetrical wave patterns
are present in the useful frequency range of the microphone.
The solutions for static pressure equalization and minimisation of wind-induced
noise have been presented. The combination of classical calculations, BEM calculations and common-sense considerations has led to a successful design of the
microphone for surface mounting.
Acknowledgements
The author wishes to thank all his colleagues at Brüel & Kjær who have participated in the development of the Surface Microphone, especially Niels Eirby,
Anders Eriksen, Johan Gramtorp, Jens Ole Gulløv, Bin Liu and Børge Nordstrand
of the microphone development department whose combined work has led to the
successful design of the Surface Microphone.
References
[1] See, for example, Beranek, L.L., "Acoustics", Acoust. Soc. Am., Part XIII, Acoustic Elements (1996).
[2] Olsen, E.S., Cutanda, V., Gramtorp, J., Eriksen, A., "Calculating the Sound Field in an Acoustic Intensity Probe Calibrator – a Practical Utilization of Boundary Element Modeling", Proceedings of the 8th International Conference on Sound and Vibration, Hong Kong (2001).
[3] Seybert, A.F., Soenarko, B., Rizzo, F.J., Shippy, D.J., "A Special Integral Equation Formulation for Acoustic Radiation and Scattering for Axisymmetric Bodies and Boundary Conditions", J. Acoust. Soc. Am., 80, 1241–1247 (1986).
[4] Cutanda, V., Juhl, P.M., Jacobsen, F., "On the Modeling of Narrow Gaps using the Standard Boundary Element Method", J. Acoust. Soc. Am., 109, 1296–1303 (2001).
[5] Juhl, P.M., "An Axisymmetric Integral Equation Formulation for Free Space Non Axisymmetric Radiation and Scattering of a Known Incident Wave", J. Sound Vib., 163, 397–406 (1993).
[6] Bai, M.R., "Study of Acoustic Resonance in Enclosures using Eigenanalysis Based on Boundary Element Methods", J. Acoust. Soc. Am., 91, 2529–2538 (1992).
[7] Brüel & Kjær, "Microphone Handbook for the Falcon Range® of Microphone Products", Brüel & Kjær (1995).
Combined NAH and Beamforming Using the
Same Array
J. Hald
Abstract
This article deals with the problem of how to design a microphone array that performs well for measurements using both Nearfield Acoustical Holography (NAH)
and Beamforming (BF), as well as how to perform NAH processing on irregular
array measurements. NAH typically provides calibrated sound intensity maps,
while BF provides unscaled maps. The article also describes a method to perform
sound intensity scaling of the BF maps in such a way that area-integration provides a good estimate of the sub-area sound power. Results from a set of loudspeaker measurements are presented.
Introduction
Fig. 1 shows a rough comparison of the resolutions on the source plane – RBF
and RNAH – that can be obtained with Beamforming (BF) and with Near-field
Acoustical Holography (NAH), respectively.
Fig. 1. Resolution of Holography (NAH) and Beamforming (BF), sketched as log(resolution)
versus log(frequency): R_BF ~ (L/D)λ for Beamforming; R_NAH ~ L at low frequencies and
R_NAH ~ ½λ at high frequencies for Holography
The resolution is defined here as the smallest distance between two incoherent
monopoles of equal strength on the source plane that allows them to be separated
in a source map produced with the method under consideration. For Beamforming
the near-axial resolution is roughly:
$$
R_{BF} \approx 1.22\,\frac{L}{D}\,\lambda
\tag{1}
$$
where L is the measurement distance, D is the array diameter and λ is the wavelength. (Also see reference [1] and the Appendix.) Basically, Beamforming performs a directional/angular resolution of the source distribution, which explains
why the resolution on the source plane is proportional to the measurement distance
L. Since the focusing capabilities of Beamforming typically require that all array
microphones be exposed almost equally to any monopole on the source plane, the
measurement distance required is normally equal to, or greater than the array
diameter. As a consequence, the resolution cannot be better than one wavelength
(approximately), which is often not acceptable at low frequencies.
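The two resolution trends of Fig. 1 can be sketched numerically. Note that the min(L, λ/2) form for NAH below is a simplification of the trend described in the text (half a wavelength at high frequencies, never poorer than roughly the distance L), not a formula given in the article:

```python
import math

C = 343.0  # speed of sound in air, m/s

def r_bf(freq, L, D):
    """Near-axial Beamforming resolution, eq. (1): R_BF ~ 1.22 (L/D) lambda."""
    return 1.22 * (L / D) * C / freq

def r_nah(freq, L):
    """Rough NAH resolution trend from Fig. 1: half a wavelength at high
    frequencies, limited by the measurement distance L at low frequencies."""
    lam = C / freq
    return min(L, lam / 2.0)

# Example: 1 m diameter array at L = D for Beamforming, L = 0.12 m for NAH.
f = 1000.0
bf = r_bf(f, L=1.0, D=1.0)   # about 0.42 m at 1 kHz
nah = r_nah(f, L=0.12)       # 0.12 m, limited by the measurement distance
```

At 1 kHz the sketch reproduces the point made above: Beamforming at L = D cannot resolve better than roughly a wavelength, while near-field holography at 12 cm distance is several times finer.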
For NAH, the resolution RNAH is approximately half the wavelength at high frequencies, which is only a bit better than the resolution of Beamforming. But at low
frequencies it never gets poorer than approximately the measurement distance L.
By measuring very near the source using a measurement grid with a small grid
spacing, NAH can reconstruct part of the evanescent waves that decay exponentially away from the source, [2]. This explains the superior low-frequency resolution of NAH.
However, NAH requires a measurement grid with less than half wavelength
spacing at the highest frequency of interest, covering at least the full mapping area,
to build up a complete local model of the sound field. This requirement makes the
method impractical at higher frequencies because too many measurement points
are needed. To get a comparable evaluation of the number of measurement points
needed for BF we notice that usually the smallest possible measurement distance
L ≈ D is applied to get the highest spatial resolution. Further, since the resolution
deteriorates quickly beyond a 30° angle from the array axis, the effective mapping
area is only slightly larger than the array area, [1]. Fortunately, by the use of optimised irregular array geometries, good suppression (at least 10 dB) of ghost images
can be achieved up to frequencies where the average element spacing is several
wavelengths, typically 3–4 wavelengths. So to map a quadratic area with a linear
dimension of four wavelengths, NAH requires more than 64 measurement positions, whereas Beamforming can achieve the same results with only one position.
This often makes BF the only feasible solution at high frequencies.
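The measurement-point comparison follows from the half-wavelength spacing requirement. A minimal sketch (the exact grid arithmetic here is an illustrative assumption, chosen simply to count a regular grid at the maximum allowed spacing):

```python
import math

def nah_positions(side_in_wavelengths, spacing_in_wavelengths=0.5):
    """Number of points in a regular grid covering a square area of the given
    side length, at the given spacing (NAH needs < half-wavelength spacing)."""
    per_side = math.ceil(side_in_wavelengths / spacing_in_wavelengths) + 1
    return per_side ** 2

n = nah_positions(4.0)  # grid for a 4-wavelength by 4-wavelength area
```

For a square area four wavelengths on a side this gives 81 grid points, i.e., more than the 64 quoted above, versus a single array position for Beamforming.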
A combined measurement technique using NAH at low frequencies and Beamforming at high frequencies therefore seems to provide the best of both worlds.
However, traditional NAH requires a regular grid array that completely covers the
sound source, while Beamforming provides optimal high-frequency performance
with an irregular array that can be smaller than the sound source. The need for
repeated change between two different arrays would not be practical, but fortunately the new SONAH (Statistically Optimal NAH) technique for NAH calculations can operate with irregular arrays and it also allows for measurement with
arrays smaller than the source, without severe spatial windowing effects, [3].
The principle of the combined measurement technique is illustrated in Fig. 2,
using a new so-called ‘Sector Wheel Array’ design, which will be explained further
in the following chapter. Based on two recordings taken with the same array at two
different distances (a nearfield SONAH measurement and a BF measurement at an
intermediate distance), a high-resolution source map can be obtained over a very
wide frequency range. The measurement distance shown for Beamforming is very
small – a bit larger than half the array diameter. Simulations and practical measurements described in this article show that with, for example, the irregular Sector
Wheel Array of Fig. 2, Beamforming processing works well down to that distance.
Fig. 2. Principle of the combined SONAH and Beamforming technique based on two measurements with the same array: Holography (SONAH) at 12 cm distance, 50 – 1200 Hz, resolution ~ 12 cm; Beamforming at 50 – 60 cm distance, 1000 – 8000 Hz, resolution ~ 0.7λ. The array is irregular with uniform density, 1 metre diameter, 60 elements
Array Designs for the Combined Measurement Technique
Considering first the Beamforming application, it is well known that irregular
arrays provide potentially superior performance in terms of low sidelobe level over
a very wide frequency band, i.e., up to frequencies where the average microphone
spacing is much larger than half the wavelength, [1]. The best performance is typically achieved if the set of two-dimensional spatial sampling intervals is non-redundant, i.e., the spacing vectors between all pairs of microphones are different.
In references [4] and [1] an optimisation technique was introduced to adjust the
microphone positions in such a way that the Maximum Sidelobe Level (MSL) is
minimised over a chosen frequency range. The MSL is defined here on the basis of
the so-called Array Pattern, i.e., in connection with a Delay-And-Sum Beamforming method focused at infinite distance, see Appendix and reference [1]. Since typically the MSL has many local minima when seen as a function of the design
variables, an iterative optimisation algorithm will usually stop at a local minimum
close to the starting point. Many starting points are therefore needed to find a “good
solution”. Such starting points can, for example, be generated using random
number generators to “scan” a certain “space of geometries”.
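The random-start scan of a "space of geometries" can be sketched as follows. This is a toy version only: the Array Pattern is the delay-and-sum pattern focused at infinity, as in the Appendix, but the 60-microphone disc, the wavenumber scan ranges and the crude MSL estimate are illustrative assumptions, not the published optimisation of references [4] and [1].

```python
import cmath
import math
import random

def array_pattern_db(xy, kx, ky):
    """Delay-and-sum array pattern (focus at infinity) of planar positions xy,
    at wavenumber offset (kx, ky); 0 dB at the main lobe (kx = ky = 0)."""
    s = sum(cmath.exp(-1j * (kx * x + ky * y)) for x, y in xy)
    return 20.0 * math.log10(max(abs(s) / len(xy), 1e-12))

def max_sidelobe_db(xy, k, n_angles=36, n_radial=40, k_min_frac=0.15):
    """Crude Maximum Sidelobe Level estimate: scan wavenumber offsets from
    just outside the main lobe out to 2k, keeping the largest pattern value."""
    worst = -300.0
    for i in range(n_angles):
        phi = 2.0 * math.pi * i / n_angles
        for j in range(1, n_radial + 1):
            kr = k * (k_min_frac + (2.0 - k_min_frac) * j / n_radial)
            worst = max(worst,
                        array_pattern_db(xy, kr * math.cos(phi), kr * math.sin(phi)))
    return worst

# Random starting geometries: 60 microphones in a 1 m diameter disc.
random.seed(1)
geometries = []
for _ in range(5):
    xy = []
    while len(xy) < 60:
        x, y = random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)
        if x * x + y * y <= 0.25:
            xy.append((x, y))
    geometries.append(xy)

k = 2.0 * math.pi * 3000.0 / 343.0  # wavenumber at 3 kHz
best = min(geometries, key=lambda g: max_sidelobe_db(g, k))
```

In the real procedure each such random start would seed an iterative optimiser of the microphone positions; the scan above only ranks the starting points by their sidelobe level.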
In references [4] and [1] the optimised array geometries were typically Spoke
Wheel Arrays consisting of an odd number of identical line arrays arranged as
spokes in a wheel, see Fig. 3.
Fig. 3. Typical Spoke Wheel Array geometry with 66 microphones optimised for Beamforming
applications (both axes in metres, from –0.6 to 0.6)
The odd number of spokes is chosen to avoid redundant spatial 2D sampling intervals. The optimisation for low MSL ensures good suppression of ghost images over a wide frequency range when the array is used at sufficiently long measurement distances, typically down to distances equal to the array diameter. If the distance becomes much smaller than that, the fairly non-uniform density of the microphones across the Spoke Wheel Array area means that different points on the source plane get very different exposure from the array. In that case, a more uniform density might be better, and numerical simulations have confirmed that hypothesis, specifically for the so-called Sector Wheel Arrays.
When the same array also has to be used for near-field holography measurements at very small measurement distances, a more uniform density is even more important. This will be covered in more detail in the following text.
Various irregular array designs have been published that exhibit a more uniform
density of the microphones over the array area and still maintain low sidelobe level
over a wide frequency band, for example, the spiral array of [5] and the Packed
Logarithmic Spiral array, [6]. These arrays, however, lack the rotational symmetry
of the Wheel Array that allows a modular construction and that can be exploited
very efficiently in a numerical optimisation to minimise the MSL. Therefore, the
Sector Wheel Array geometry was developed. Fig. 4 shows a Packed Logarithmic
Spiral array with 60 elements, a Sector Wheel Array with 60 elements and a Sector
Wheel Array with 84 elements. For all three arrays the diameter is approximately 1 metre.
Fig. 4. Three different irregular array geometries with uniform element density. The enclosing
circle around all three arrays has a diameter of 1.2 metres, so the array diameters are all around 1 metre
Packed Log. Spiral (60)
Sector Wheel (60)
Sector Wheel (84)
The Sector Wheel Arrays maintain the rotational symmetry of the Spoke Wheel
Arrays, but angularly limited sectors replace the small line arrays of the wheel.
Each one of the identical sectors contains in this case 12 elements in an irregular
pattern, which has been optimised to minimise the MSL of the array.
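The rotational symmetry can be sketched as follows: one sector's element positions (here 12 hypothetical random points, not the optimised coordinates) are replicated by rotation to build the full 60-element wheel:

```python
import numpy as np

def sector_wheel(radii, angles, n_sectors):
    """Replicate one sector's polar element positions by rotation, giving an
    array with n_sectors-fold rotational symmetry (modular construction)."""
    blocks = []
    for s in range(n_sectors):
        a = angles + s * 2 * np.pi / n_sectors
        blocks.append(np.column_stack([radii * np.cos(a), radii * np.sin(a)]))
    return np.vstack(blocks)

rng = np.random.default_rng(1)
# 12 hypothetical elements in one 72-degree sector of a 1 m diameter wheel;
# sqrt-distributed radii give approximately uniform density over the area
radii = 0.5 * np.sqrt(rng.uniform(0.01, 1.0, 12))
angles = rng.uniform(0.0, 2 * np.pi / 5, 12)
array_xy = sector_wheel(radii, angles, n_sectors=5)
print(array_xy.shape)   # (60, 2)
```

Only the 12 coordinates of a single sector enter an MSL optimisation, which is what makes the numerical search efficient.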
Fig. 5 shows the Maximum Sidelobe Level (MSL) as a function of frequency for
the three array geometries of Fig. 4, and the Spoke Wheel of Fig. 3, assuming
focusing of the array to be within 30° from the array axis. If free focusing is
required (i.e., up to 90° from the array axis), then the numbers on the frequency
axis have to be multiplied by a factor of 0.75, [1]. Clearly, the 84-element array has very low sidelobe level at frequencies below approximately 2000 Hz, which for a free focusing angle would be 1500 Hz. With the array very close to the noise source, as required for holography processing, the free focusing angle has to be considered, because waves will be incident from all sides. The 1500 Hz limit turns out to be just a little below the frequency where the average spacing between the elements of the array is half a wavelength. The average element spacing is approximately 10 cm.
Fig. 5. Maximum Sidelobe Level (MSL) for the three different array geometries of Fig. 4 and the Spoke Wheel of Fig. 3. The focusing of the array is restricted here to within 30° from the array axis. The curves (Packed Log. Spiral, Sector Wheel 60, Sector Wheel 84 and Spoke Wheel 66) show MSL in dB, from 0 down to –24 dB, versus frequency from 0 to 8000 Hz
Optimisation of the Sector Wheel Array geometries in Fig. 4 has been performed
by adjusting (using a MiniMax optimisation program) the coordinates of the elements in a single sector in such a way that the maximum MSL is minimised over
the frequency range of interest. In this process a limit was put on the MSL up to 1500 Hz for the 84-element array and up to 1200 Hz for the 60-element array. As it turned out, this helped maintain the uniform element distribution and therefore the possibility of using the array for holography at frequencies where the average element spacing is less than half a wavelength.
The four array designs represented in Fig. 5 will typically be used for Beamforming applications in the frequency range from around 1000 Hz and up to
8000 Hz. Here, the Packed Log Spiral has the highest MSL, which is not surprising
since it has not been numerically optimised for minimum MSL. As expected, the
84-channel Sector Wheel has the lowest MSL, since it has been optimised and uses
the largest number of elements. The Spoke Wheel array is a bit better than the corresponding 60-element Sector Wheel over the Beamforming frequency range. But
the Sector Wheel is significantly better over a rather wide range of low frequencies,
where it applies for SONAH holography.
Simulation of Beamforming Measurements at a Small
Source Distance
Some simulated measurements were performed to investigate how well the three
uniform arrays of Fig. 4 would perform with Beamforming from a measurement
distance of 0.6 m, i.e., a bit more than half the array diameter. The results are
shown in Fig. 6 for the case of 5 uncorrelated monopoles of equal strength at
8000 Hz.
Fig. 6. Simulated measurements on 5 monopoles at 8 kHz and at a measurement distance of 60 cm with the three array designs shown in Fig. 4: Packed Log. Spiral (60), Sector Wheel (60) and Sector Wheel (84). The displayed range is 10 dB and the mapped area spans –0.5 m to 0.5 m in both directions
The Beamforming calculations have been performed using the Cross-spectral
algorithm (with exclusion of Auto-spectra) described in reference [1], focused on
the source plane at 0.6 m distance. Compared to Fig. 5 the Auto-spectral exclusion
reduces the MSL by approximately 1 dB at 8000 Hz for the 84-element Sector
Wheel and by approximately 0.5 dB for the three other arrays. For all three plots in
Fig. 6 the displayed dynamic range is 10 dB, and as expected from Fig. 5, the two
60-element arrays have very comparable performance at 8 kHz with a small advantage to the Sector Wheel Array. The 84-element Sector Wheel Array has 3 dB
lower sidelobe level at 8 kHz, and therefore there are no visible ghost images in
Fig. 6. The MSL values are seen to be slightly higher at the very short measurement
distance than for the infinite focus distance represented in Fig. 5 – typically around
2 dB higher. The 8 kHz data presented in Fig. 6 are not entirely representative for
the relative performance of the three arrays over the full frequency range. If we
look instead at 3 kHz, then according to Fig. 5 the 60-element Sector Wheel Array has approximately 6 dB lower sidelobe level than the 60-element Packed Logarithmic Spiral.
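A minimal sketch of the Cross-spectral algorithm with exclusion of Auto-spectra (the diagonal of the cross-spectral matrix), normalised so that an in-focus plane wave returns its squared amplitude; the geometry and wave numbers below are illustrative assumptions, not the measurement configuration:

```python
import numpy as np

def cs_beamform_power(C, positions, k_vec):
    """Cross-spectral beamformer power with Auto-spectra (diagonal) excluded,
    normalised so an in-focus plane wave returns its squared amplitude."""
    M = len(positions)
    v = np.exp(-1j * (positions @ k_vec))     # steering phases for focus k_vec
    Cn = C - np.diag(np.diag(C))              # exclude the auto-spectra
    return np.real(np.conj(v) @ Cn @ v) / (M * M - M)

rng = np.random.default_rng(2)
pos = rng.uniform(-0.5, 0.5, (60, 2))         # hypothetical irregular array
k0 = np.array([30.0, -12.0])                  # wave number vector of the wave
p = 2.0 * np.exp(-1j * (pos @ k0))            # plane-wave pressures, amplitude 2
C = np.outer(p, p.conj())                     # cross-spectral matrix
print(cs_beamform_power(C, pos, k0))          # ≈ 4.0 (= 2.0 squared) in focus
```

Dropping the diagonal removes uncorrelated channel noise and, as noted in the text, typically lowers the sidelobe level slightly.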
The following consideration illustrates the advantage of Beamforming over
NAH for source location at high frequencies. If the maps in Fig. 6 had been produced with traditional NAH, then a measurement grid with dimensions around
1.2 × 1.2 m would have had to be used, with a grid spacing of around 2 cm, which would have meant approximately 3600 measurement positions!
Numerical Simulations to Clarify the Suitability of the
Arrays for Holography
Another series of simulations were performed to investigate the frequency ranges
over which the three arrays of Fig. 4 and the Wheel Array of Fig. 3 were suited to
SONAH holography measurements. In SONAH (and other types of NAH) a complete reconstruction of the entire near field is attempted over a 3D region around
the measurement area. This is possible only if the spatial samples of the sound field taken by the array microphones allow at least a reconstruction of the pressure field over the area covered by the array. So from the available spatial
samples it must be possible to reconstruct (interpolate) the sound pressure across
the measurement area. This can be done by the SONAH algorithm.
The problem of reconstructing a (2D) band-limited signal from irregular samples
has been covered quite extensively in the literature; see for example [7]. In order
that the reconstruction can be performed in a numerically stable way, it is necessary
that the distribution of the sampling (measurement) points exhibit some degree of
uniform density across the sampling area. Such a criterion was used in the design of
the Sector Wheel Arrays.
For the numerical simulations, a set of (eight) monopole point sources at 30 cm
distance from the array was used. All point sources were inside an area of the same
size as the arrays. For each frequency and each point source, the complex pressure
was calculated at the array microphone positions, and SONAH was then applied to
calculate the sound pressure over a dense grid of points inside the measurement
area in the measurement plane. For this a 40 dB dynamic range was used, [3]. The
interpolated pressure from each monopole was then compared with the known
pressure from the same monopole, and the Relative Average Error was estimated at
each frequency as the ratio between a sum of squared errors and a corresponding
sum of squared true pressure values. The summation was, in both cases, over all
interpolation points and all sources.
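The error measure just described is straightforward to compute; a minimal sketch, where the 1% amplitude-error check is a made-up example rather than data from the simulations:

```python
import numpy as np

def relative_average_error_db(p_true, p_est):
    """Relative Average Error: ratio of the summed squared errors to the
    summed squared true pressures, expressed in dB."""
    err = np.abs(p_est - p_true) ** 2
    ref = np.abs(p_true) ** 2
    return 10.0 * np.log10(err.sum() / ref.sum())

# made-up check: a 1% amplitude error at every point gives roughly -40 dB
p = np.exp(1j * np.linspace(0.0, 6.0, 50))
print(relative_average_error_db(p, 1.01 * p))   # ≈ -40 dB
```

In the article's simulations the sums run over all interpolation points and all eight monopole sources at each frequency.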
Fig. 7 gives a comparison of the Relative Average Interpolation Errors obtained
with the four different arrays.
Fig. 7. Comparison of Relative Average Interpolation Error for the three arrays in Fig. 4 and the Wheel Array of Fig. 3. The error is averaged over a set of monopole point sources at a distance of 30 cm. The curves (Packed Log. Spiral, Sector Wheel 60, Sector Wheel 84 and Spoke Wheel 66) show the error in dB, from 0 down to –40 dB, versus frequency from 0 to 3000 Hz
Clearly, the 84-element optimised Sector Wheel Array can represent the sound
field over the array area up to the previously mentioned 1500 Hz (approximately),
while the 60-element Sector Wheel Array provides acceptable accuracy only up to
around 1200 Hz. This actually means that the two Sector Wheel Arrays apply over
the same frequency ranges as regular arrays with the same average element spacing. The 60-element Packed Logarithmic Spiral is seen to perform much like the
60-element Sector Wheel Array in connection with SONAH calculations. But as
expected, the 66-element Spoke Wheel Array performs much worse than the three arrays with more uniform element density. Acceptable interpolation accuracy (20 dB suppression of errors) is achieved only up to approximately 700 Hz. If the
monopole sources had been positioned nearer the array, this upper frequency
would have been even lower.
Intensity Scaling of Beamformer Output
When combining low-frequency results obtained with SONAH and high-frequency results obtained with Beamforming it is desirable to have the results scaled
in the same way. This is not straightforward, however, as will be apparent from the
following description of the basic output from SONAH and Beamforming.
Based on the measured pressure data, SONAH builds a sound field model valid
within a 3D region around the array, and using that model it is possible to map any
sound field parameter. Typically the sound intensity normal to the array plane is
calculated to get the information about source location and strength. Since the
measurement is taken very near the sources, the energy radiated in any direction
within a 2π solid angle will be captured and included in the sound intensity and sound power estimates.
Beamforming, on the other hand, is based on a measurement taken at some intermediate distance from the sources, where only a fraction of the 2π solid angle is covered by the array. Rather than estimating sound field parameters for the source region, directional filtering is performed on the sound field incident towards the array. As a result, only the relative contributions to the sound pressure at the array position from different directions are obtained. Reference [8] describes a scaling of
the output that allows the contribution at the array position from specific source
areas to be read directly from the Beamformer maps. This is, of course, meaningful
only if the pressure distribution across the array area from the various partial
sources is fairly constant, which will be true if the array covers a relatively small
solid angle as seen from the sources.
But in the context of this article, we wish to take BF measurements as close as
possible to the source area, in order to obtain the best possible spatial resolution. As
a consequence, the radiation into a rather large fraction of the 2π solid angle is
measured. We should therefore be in a better position to get information about (for
example) the sound power radiated through the source plane. If we want to scale
the Beamformer output in such a way that the scaled map represents the source
strength (in some way) it seems logical to scale it as active sound intensity, because
that quantity represents the radiation towards the array and into the far-field region.
The near-field pressure contains evanescent components that are not picked up by the array.
The Appendix describes the derivation of a method to scale the output from a
Delay-And-Sum Beamformer in such a way that area integration of the scaled output provides a good estimate of the sub-area sound power. For that reason it is natural to use the term “Sound Intensity Scaling” for the method. The derivation is
performed looking at a single monopole point source in the far-field region, and
assuming that the array provides a good angular resolution, i.e., assuming the mainlobe covers only a small solid angle. An evaluation is then given of the errors introduced by the far-field assumption and the assumption of a narrow mainlobe. This is
done both for Delay-And-Sum processing and for the Cross-spectral algorithm
with exclusion of Auto-spectra. The main conclusions, for the 60-element Sector Wheel Array of Fig. 4 and for frequencies above 1200 Hz, are:
1) The error is less than 0.4 dB when using a measurement distance not smaller
than the array diameter.
2) At smaller measurement distances the error increases, but it does not exceed
approximately 0.6 dB when the distance is larger than 0.6 times the array
diameter.
In the Appendix it is argued that if the scaling works for a single omni-directional
source, then it holds also for a set of incoherent monopole sources in the same
plane. If sources are partially coherent and/or if single sources are not omni-directional, then because of the limited angular coverage of the array, accurate sound
power estimation cannot possibly be obtained. Fortunately, many real-world
sound sources tend to have low spatial coherence in the frequency range where
Beamforming will be used in the combined NAH/BF method.
The derivation of the scaling is based on matching the area-integrated map with
the known sound power for a monopole sound source. In the derivation, area integration was performed only over the hot spot corresponding to the mainlobe of the
Beamformer. At high frequencies many sidelobes will typically fall within the mapping area, and it turns out that area integration over a large number of sidelobes can contribute significantly to the estimated sound power. This effect can be avoided in practice by the use of a finite dynamic range during the area integration, typically around 10 dB. A frequency-dependent adjustment of the integration area to match the resolution is not practical.
The measurement results to be presented in the following section will show the
influence of measurement distance, size of the power integration area and of the
presence of more than a single source. Also, the sound power estimates will be
compared with sound power data obtained from sound intensity maps measured
with a sound intensity probe.
Measurements
In order to test the performance of the 60-element Sector Wheel Array of Fig. 4,
measurements were taken at 12 cm distance from two small loudspeakers for
SONAH processing, and at 55 cm and 100 cm distance for Beamforming processing. The microphones used in the array were Brüel & Kjær Type 4935. At all three
distances, measurements were taken with coherent and incoherent white-noise excitation of the two speakers and also with only one speaker excited.

Fig. 8. 1/3-octave sound intensity maps for the measurements with only the speaker on the right excited by broadband random noise. The four rows represent Beamforming measurements from 100 cm and 55 cm distance, SONAH from 12 cm distance and measurements with a sound intensity probe at 7 cm distance. The 1/3-octave centre frequencies (200 Hz, 500 Hz, 1 kHz, 2 kHz and 5 kHz) are shown at the top of the columns. Dynamic range is 15 dB

For each of
these three excitations, a scan was performed approximately 7 cm in front of the
two loudspeakers with a Brüel & Kjær sound intensity probe Type 3599. The two
speakers were identical small PC units with drivers of diameter 7 cm, mounted with 17 cm between the centres of the drivers. The Beamforming processing was performed with the Cross-spectral algorithm with exclusion of Auto-spectra, [1].
Fig. 8 shows 1/3-octave sound intensity maps for the measurements with only
the speaker on the right excited. The arrangement of the speaker can be seen in
some of the contour plots.
The four rows of contour plots represent the Beamforming measurements taken from a distance of 100 cm and 55 cm, the SONAH measurements taken from 12 cm
distance and the measurements taken with an intensity probe from a distance of
7 cm. For the first three rows (representing Beamforming and SONAH results) the
sound intensity has been estimated in the source plane over an area of approximate
size 80 cm × 80 cm, while the last row shows the sound intensity measured 7 cm
from the plane of the speakers over an area of size 36 cm × 21 cm. All plots show a
15 dB dynamic range from the maximum level, with 1.5 dB steps between the colours. Yellow/orange/green colours represent outward intensity and blue colours represent inward intensity. The absolute levels will be presented subsequently through area-integrated sound power data.
The resolution obtained with Beamforming and SONAH is in good agreement
with the expectations as shown in Fig. 1. The bend on the resolution curve for
SONAH is in this case at approximately 1500 Hz, being determined by ½λ = L, where L is the measurement distance and λ is the wavelength. Clearly, at low frequencies the Beamforming resolution is very poor, while above approximately
1.5 kHz it is approximately as good as that obtained with the sound intensity probe.
SONAH provides good resolution over the entire frequency range, but above
approximately 1200 Hz, the average spacing of the microphone grid is too large to
reconstruct the sound pressure variation across the measurement area. As a result,
distortions will slowly appear as frequency increases, and more evidently – the
level will be underestimated as can be seen in the sound power spectra of Fig. 11
and Fig. 12. The combined method using SONAH up to 1250 Hz and Beamforming at higher frequencies provides good resolution at all frequencies, and the sound
power estimate is also good as shown in Fig. 11 and Fig. 12. So two recordings
taken with the Sector Wheel array at two different distances can provide the information obtained with a time-consuming scan with an intensity probe (104 positions for the small plots in Fig. 8). In addition, many other types of analysis can be performed based on the same data, such as transient analysis of radiation phenomena.
As mentioned previously, the sound intensity scaling of the output from Beamforming is defined in such a way that area integration over the mainlobe area will
provide a good estimate of the sound power from a monopole point source. Fig. 9
depicts the 1/3-octave sound power spectra for the single speaker obtained from the
Fig. 9. 1/3-octave sound power spectra for the single speaker measurement. The intensity
probe map has been integrated over the entire mapping area shown in Fig. 8. The Beamforming measurement, taken at a distance of 55 cm, has been integrated over the full mapping
area and over the mainlobe area only
scan with the sound intensity probe and from the Beamforming measurement at
55 cm distance. The intensity probe measurement has been integrated over the full
measurement area shown in Fig. 8. Two spectra are shown for the sound power
obtained with the Beamforming measurement: one obtained by integration over the
full mapping area; another obtained by integration over a small rectangular area
with x- and y-dimensions equal to the mainlobe diameter and centered at the
known point source position. The radius of the mainlobe is 1.22λL/D, where λ and L are the wavelength and the measurement distance (55 cm) as defined above, and where
D is the array diameter of approximately 1 m; refer to equation (A.15) in the Appendix. At low frequencies the mainlobe is larger than the entire mapping area of 0.8 m
by 0.8 m, and therefore the two Beamforming spectra are identical. Here, the sound
power is underestimated, because the power outside the mapping area is not
included, and also the assumptions made for the sound intensity scaling fail to hold,
refer to the Appendix. At high frequencies the power estimated by Beamforming is
too high, even when the integration covers the mainlobe area only. This is mainly
because the loudspeaker is no longer omni-directional as assumed in the scaling,
but concentrates the radiation in the axial direction, towards the array. At 5 kHz the
diameter of the driver unit is approximately one wavelength. Another reason for
the over-estimation could be the tendency of the intensity scaling to overestimate
when the measurement distance is very small, see Fig. A3. Looking at the sound
power obtained by integration over the entire mapping area, it is even higher at the
high frequencies. The reason is that sidelobes (ghost images) contribute significantly when the integration area is much larger than the mainlobe area, even when
the array has good sidelobe suppression, as the present Sector Wheel array does.
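The mainlobe radius formula 1.22λL/D used above is simple to evaluate; c = 343 m/s and the specific frequencies below are assumptions for illustration:

```python
import math

def mainlobe_radius(freq_hz, distance_m, diameter_m, c=343.0):
    """Mainlobe footprint radius on the source plane: 1.22 * wavelength * L / D."""
    wavelength = c / freq_hz
    return 1.22 * wavelength * distance_m / diameter_m

# assumed measurement values from this section: L = 0.55 m, D = 1 m
for f in (500.0, 2000.0, 5000.0):
    r = mainlobe_radius(f, 0.55, 1.0)
    print(f"{f:.0f} Hz: mainlobe radius {r:.3f} m")
```

At 500 Hz the mainlobe diameter already exceeds 0.9 m, consistent with the observation that the mainlobe outgrows the 0.8 m mapping area at low frequencies.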
Fig. 10. 1/3-octave sound power spectra for the single speaker measurement. Again the intensity probe result is shown, now together with the results from Beamforming measurements at distances of 55 cm and 100 cm. For both of these, the sound power integration covers the entire mapping area
Fig. 10 shows results similar to those of Fig. 9, but instead of focusing on the
influence of the size of the power integration area, the influence of the measurement distance is now investigated. For both the Beamforming measurements taken
at different distances, the power integration has been performed over the entire
mapping area. At low frequencies, the biggest underestimation results from the
measurement taken at the longest distance, because the resolution is poorer and
consequently a larger part of the power falls outside the mapping area. At high frequencies the measurement at 55 cm distance produces the biggest over-estimation.
There are several reasons for that. One is that the sidelobes become a bit stronger at
measurement distances smaller than the array diameter. Another reason is the better
resolution: a narrower mainlobe means that the ratio between the sidelobe-area and
the mainlobe-area increases significantly. Finally, the scaling tends to over-estimate the sound power when used with measurements taken at very small distances,
as can be seen in Fig. A3.
Fig. 11 shows the 1/3-octave sound power spectra for the single loudspeaker
obtained with intensity probe, SONAH and Beamforming. The Beamforming
Fig. 11. 1/3-octave sound power spectra for the single speaker measurement. The results
obtained with Intensity Probe, SONAH and Beamforming are compared
measurement at 55 cm distance has been chosen, and for that measurement the
sound power integration has been performed over the full sound intensity map (see
Fig. 8), but using only a 10 dB range of intensity data (i.e., data points where the level is more than 10 dB below the peak level are ignored). The result is very close to
that obtained with integration over the mainlobe area only, see Fig. 9. Above
500 Hz this leads to a good estimate of the sound power, apart from the previously
discussed overestimation at the highest frequencies. SONAH provides good sound
power estimates up to approximately 1.6 kHz, apart from a small overestimation
(which could be due to the small measurement area that is used with the sound
intensity probe). But above 1.6 kHz the sound power is increasingly underestimated with SONAH.
As expected, the results with equal but incoherent excitation of the two speakers
are very similar to the results with only one loudspeaker excited. The sound power
spectra all increase by approximately 3 dB over the major part of the frequency
range, but the differences between the spectra remain unchanged. Therefore no
results are shown here.
Fig. 12. 1/3-octave sound power spectra for the case of the two speakers being excited with
the same white noise signal. Results obtained with Intensity Probe, SONAH and Beamforming are compared
Equal but coherent in-phase excitation of the two loudspeakers will, on the other hand, cause the radiation to deviate more from being omni-directional, which violates the assumptions on which the intensity scaling of Beamformer maps is based.
Fig. 12 depicts the 1/3-octave sound power spectra obtained using intensity
probe, SONAH and Beamforming with identical excitation of the two speakers.
The SONAH spectrum follows the intensity probe spectrum in much the same way
as for the case of only a single speaker being excited. But the sound power obtained
from the scaled Beamformer map shows additional deviation in the frequency
range from 1 kHz to 2 kHz. In that frequency range the distance between the two
speakers is between half a wavelength and one wavelength, which will focus the
radiation in the axial direction. But the deviation remains within approximately
2 dB from the power spectrum obtained with the sound intensity probe.
Conclusions
A new combined array measurement technique has been presented that allows
Near-field Acoustical Holography and Beamforming to be performed with the
same array. This combination can provide high-resolution noise source location
over a very broad frequency range based on two recordings with the array at two
different distances from the source. The key elements in the presented solution are
the use of SONAH for the holography calculation, sound intensity scaling of the
Beamformer output and the use of a specially designed irregular array with uniform element density. The optimised Sector Wheel Array is an example of an
applicable array with very high performance, particularly for the Beamforming
part. Numerical simulations and a set of measurements confirm the strengths of
the combined method and of the Sector Wheel array design. All of the mentioned functionality is supported in PULSE Version 9.0 from Brüel & Kjær.
Appendix: Sound Intensity Scaling of Beamformer Output
As illustrated in Fig. A1, we consider a planar array of M microphones at locations
rm (m = 1,2,..., M) in the xy-plane of our coordinate system. When such an array is
applied to Delay-And-Sum Beamforming, the measured pressure signals pm are
individually weighted and delayed, and then all signals are summed, [9]:
Fig. A1. Illustration of a phased microphone array, a directional sensitivity represented by a mainlobe, and a plane wave incident from the direction of the mainlobe. The mainlobe direction is the unit vector κ, the corresponding wave number vector is k = –kκ, and r_m denotes the microphone positions

b(κ, t) = (1/M) ∑_{m=1}^{M} w_m p_m(t – Δ_m(κ))     (A.1)
The individual time delays Δ_m are chosen with the aim of achieving selective directional sensitivity in a specific direction, characterised here by a unit vector κ. This objective is met by adjusting the time delays in such a way that signals associated with a plane wave, incident from the direction κ, will be aligned in time before they are summed. Geometrical considerations (see Fig. A1) show that this can be obtained by choosing:

Δ_m = (κ · r_m) / c     (A.2)
where c is the propagation speed of sound. Signals arriving from other far-field
directions will not be aligned before the summation, and therefore they will not
coherently add up. The ‘weights’ wm on the microphone signals are real numbers.
The frequency domain version of expression (A.1) for the Delay-And-Sum
beamformer output is:
B(κ, ω) = (1/M) ∑_{m=1}^{M} w_m P_m(ω) e^{–jωΔ_m(κ)} = (1/M) ∑_{m=1}^{M} w_m P_m(ω) e^{jk·r_m}     (A.3)

Here, ω is the temporal angular frequency, k ≡ –kκ is the wave number vector of a fictitious plane wave incident from the direction κ in which the array is focused (see Fig. A1) and k = ω/c is the wave number. In equation (A.3) an implicit time factor equal to e^{jωt} is assumed.
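Equation (A.3) can be sketched directly; since the microphones lie in the xy-plane, only the in-plane components of κ enter the phases. The geometry and the in-focus plane-wave check below are illustrative assumptions:

```python
import numpy as np

def das_frequency(P, positions, kappa_xy, omega, c=343.0, w=None):
    """Frequency-domain Delay-And-Sum, eq. (A.3):
    B = (1/M) sum_m w_m P_m exp(j k . r_m), with k = -(omega/c) * kappa.
    kappa_xy holds the in-plane components of the focus unit vector kappa."""
    M = len(P)
    w = np.ones(M) if w is None else w
    k = -(omega / c) * np.asarray(kappa_xy)
    return (w * P * np.exp(1j * (positions @ k))).sum() / M

rng = np.random.default_rng(3)
pos = rng.uniform(-0.5, 0.5, (60, 2))         # hypothetical 1 m irregular array
kappa0 = np.array([0.3, 0.4])                 # in-plane part of the unit vector
omega = 2 * np.pi * 2000.0
k0 = -(omega / 343.0) * kappa0
P = 1.5 * np.exp(-1j * (pos @ k0))            # in-focus plane wave, P0 = 1.5
print(abs(das_frequency(P, pos, kappa0, omega)))   # ≈ 1.5 (= P0)
```

This reproduces the property noted below equation (A.5): an in-focus plane wave is returned with exactly its amplitude P0.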
Through our choice of time delays Δ_m(κ), or equivalently of the “preferred” wave number vector k ≡ –kκ, we have “tuned” the beamformer to the far-field direction κ. Ideally we would like to measure only signals arriving from that direction, in order to get a perfect localisation of the sound source. To investigate how much “leakage” we will get from plane waves incident from other directions, we now assume a plane wave incident with a wave number vector k_0 which is different from the preferred k ≡ –kκ. The pressure measured by the microphones will then ideally be:

P_m(ω) = P_0 e^{–jk_0·r_m}     (A.4)
which according to equation (A.3) will give the following output from the beamformer:

B(κ, ω) = (P_0/M) ∑_{m=1}^{M} w_m e^{j(k – k_0)·r_m} ≡ P_0 W(k – k_0)     (A.5)
Here, the function W,

W(K) ≡ (1/M) ∑_{m=1}^{M} w_m e^{jK·r_m}     (A.6)

is the so-called Array Pattern. It has the form of a 2D spatial Fourier transform of a weighting function w, which consists of delta functions at the microphone positions. In the following we will assume all weights w_m to equal one. Because the
microphone positions rm have z-coordinates equal to zero, the Array Pattern is
independent of Kz. We therefore consider the Array Pattern W only in the (Kx , Ky)
plane, and when it is used, as in equation (A.5), the 3D wavenumber vector is projected onto the (Kx , Ky) plane. In that plane, W has an area with high values
around the origin with a peak value equal to 1 at (Kx , Ky) = (0, 0).
According to equation (A.5), this peak represents the high sensitivity to plane waves coming from the direction κ, in which the array is focused. Fig. A1 contains an illustration of that peak, which is called the mainlobe. Other directional peaks, which are called sidelobes, will cause waves from such directions to leak into the measurement of the mainlobe direction κ, creating so-called ‘ghost sources’ or ‘ghost images’. The Maximum Sidelobe Level (MSL) is defined as the ratio between the highest sidelobe and the mainlobe for a given frequency range.
In the expression (A.5) for the response to a plane wave, notice that the output is
exactly equal to the amplitude P0 of the plane wave, when the array is focused
towards the direction of incidence of the plane wave, i.e., when k = k0.
For stationary sound fields it is natural to operate with the matrix of cross spectra
between the microphones, which provides a better average representation of the
stationary phenomena. Exclusion of the auto-spectra offers the possibility of reducing the influence of noise in the individual measurement channels, and it turns out
that it also often reduces the sidelobe level, [1]. For the derivation of the sound
intensity scaling we will, however, not use the Cross-spectral formulation. But the
scaling holds for the Cross-spectral formulation as well, as long as it is scaled in
such a way that the response to an in-focus incident plane wave is equal to the
squared amplitude of the wave. The formulation in reference [1] is scaled that way.
The validity of the intensity scaling in combination with the Cross-spectral Beamformer is investigated both through simulations in this appendix and through the
practical measurements.
From the literature it is known that the size and shape of the mainlobe of the array pattern are determined almost entirely by the size and overall shape of the array, [9], [1], while the sidelobes are highly affected by the actual positions of the microphones. The shape of the mainlobe is usually close to the mainlobe from a “continuous aperture” of the same shape as the array or, equivalently, from a very densely populated array covering the same area. For a circular array geometry, the equivalent continuous aperture has the following array pattern:
W(K) = 2 J_1(KD/2) / (KD/2),   K ≡ |K̃|    (A.7)
where D is the diameter of the aperture (or of the array), J_1 is the Bessel function of order 1, and K̃ is the projection of K onto the (K_x, K_y) plane. What we have achieved is a general approximation for the shape of the mainlobe, which is independent of the specific positioning of the microphones:

W(K) ≈ W(K̃)   for   |K̃| ≤ K_1    (A.8)
Here, K_1 is the first null of the aperture array pattern, W(K_1) = 0, given by:

(1/2) K_1 D = ξ_1 ≈ 3.83    (A.9)

ξ_1 being the first null of the Bessel function of the first order.
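The constant ξ_1 and the first null of the aperture pattern (A.7) can be checked numerically. The sketch below uses SciPy's Bessel routines; the 1 m aperture diameter is an assumed example value:

```python
import numpy as np
from scipy.special import j1, jn_zeros

xi1 = jn_zeros(1, 1)[0]                  # first positive zero of J1
print(round(xi1, 2))                     # -> 3.83

# The aperture pattern of eq. (A.7) vanishes where (1/2) K1 D = xi1:
D = 1.0                                  # assumed 1 m aperture diameter
K1 = 2.0 * xi1 / D
W = 2.0 * j1(0.5 * K1 * D) / (0.5 * K1 * D)
print(abs(W) < 1e-9)                     # -> True
```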
Derivation of the Scaling
For the derivation we now assume a single monopole point source on the array
axis at a distance L that is so large that the amplitude and phase of the pressure is
practically constant across the array area. Thus, for the array the sound field is a
plane wave with amplitude P0 incident with wave number vector k 0 = – kẑ ,
where ẑ is the unit vector in the z-direction. The sound power P_a radiated by the monopole is then:

P_a = 4πL² · I = 4πL² · P_0²/(2ρc) = 2πL² · P_0²/(ρc)    (A.10)

where I is the sound intensity at the position of the array and ρ is the density of the medium.
The output from the Delay-And-Sum beamformer is, according to equation (A.5):

B(κ) = P_0 W(k − k_0) = P_0 W(−kκ + kẑ)    (A.11)
where the known values of the two wave number vectors have been inserted. In order to use the approximation (A.8) for the mainlobe of the array pattern, we need to project the wave number vectors onto the xy-plane, which leads to:

B(κ) ≈ P_0 W(k sin(θ))   for   k sin(θ) ≤ K_1    (A.12)

θ being the angle from the array axis (the z-axis) to the focus direction κ.
The Beamformer is now used to create a source map in the plane z = L. Each position on this source plane is described by its distance R to the z-axis and its azimuth angle φ. Assuming relatively small angles from the z-axis, we can use the approximation:

R = L tan(θ) ≈ L sin(θ)    (A.13)

where θ is still the angle to the z-axis. Use of equation (A.13) in (A.12) leads to the following approximate expression for the “mainlobe” of the beamformed map on the source plane:
B(R, φ) ≈ P_0 W(kR/L)   for   R ≤ K_1 L/k ≡ R_1    (A.14)
By the use of equation (A.9), we get for the radius R_1 of the mainlobe on the source plane:

R_1 ≡ K_1 L/k = (2L/(kD)) ξ_1 ≈ 1.22 Lλ/D    (A.15)
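As a quick worked example of equation (A.15), the sketch below evaluates the mainlobe radius both exactly and with the 1.22 Lλ/D approximation. The array diameter, distance and frequency are assumed example values:

```python
import numpy as np

# Mainlobe radius on the source plane, eq. (A.15): R1 = K1*L/k = 1.22*L*lam/D.
# Assumed example values: 1 m array, 3 m distance, 2 kHz, c = 343 m/s.
c, f, D, L = 343.0, 2000.0, 1.0, 3.0
lam = c / f
k = 2.0 * np.pi / lam
xi1 = 3.8317                             # first zero of J1, see eq. (A.9)
K1 = 2.0 * xi1 / D
R1_exact = K1 * L / k
R1_approx = 1.22 * L * lam / D
print(round(R1_exact, 3), round(R1_approx, 3))   # both about 0.63 m
```

At 2 kHz a 1 m array thus cannot separate sources on a 3 m distant plane that are much closer together than about 0.6 m.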
The scaling factor α needed to obtain the intensity scaled beamformer output,

B_I(R, φ) ≡ α · B²(R, φ)    (A.16)
is now defined in such a way that the integral of B_I(R, φ) over the mainlobe equals half of the radiated sound power P_a, i.e., the power radiated into the hemisphere containing the array:

(1/2) P_a = ∫_0^{R_1} ∫_0^{2π} α B²(R, φ) R dR dφ = 2πα P_0² ∫_0^{R_1} W²(kR/L) R dR    (A.17)
Use of equation (A.7), substitution with the variable

u ≡ (kR/L)(D/2) = (kD/(2L)) R    (A.18)
for R in equation (A.17) and application of the relation (A.15) leads to

(1/2) P_a = 2πα P_0² ∫_0^{ξ_1} [2 J_1(u)/u]² (2L/(kD))² u du = α · 32π (P_0 L/(kD))² · Γ    (A.19)
with

Γ ≡ ∫_0^{ξ_1} (J_1(u)² / u) du ≈ 0.419    (A.20)
The scaling factor can finally be obtained through use of the expression (A.10) for the sound power in equation (A.19):

α = (1/(32Γ)) · (kD)²/(ρc) ≈ (2.94/(ρc)) · (D/λ)²    (A.21)
Clearly, the scaling factor is proportional to the square of the array diameter measured in wavelengths. This is natural, because the un-scaled beamformer output, with the array focused towards the point source, is basically independent of the array geometry, but the width of the mainlobe is inversely proportional to the array diameter measured in wavelengths (refer to equation (A.15)). To maintain the area-integrated power with increasing array diameter, the scaling factor must have the proportionality mentioned above.
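The constants in equations (A.20) and (A.21) can be reproduced by numerical quadrature. The sketch below does so; the ρc value and the array diameter in wavelengths are assumptions used only for illustration:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1, jn_zeros

xi1 = jn_zeros(1, 1)[0]                          # first zero of J1

# Gamma of eq. (A.20): integral of J1(u)^2 / u from 0 to xi1
Gamma, _ = quad(lambda u: j1(u) ** 2 / u, 0.0, xi1)
print(round(Gamma, 3))                           # -> 0.419

# Scaling factor of eq. (A.21); assumed air with rho*c = 413 kg/(m^2 s)
rho_c = 413.0
D_over_lambda = 5.0                              # assumed diameter in wavelengths
kD = 2.0 * np.pi * D_over_lambda
alpha = kD ** 2 / (32.0 * Gamma * rho_c)
# The constant 4*pi^2 / (32*Gamma) is the 2.94 quoted in the text:
print(round(4.0 * np.pi ** 2 / (32.0 * Gamma), 2))
```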
Evaluation of Errors
The governing principle of the scaling is that area integration of the scaled output must provide a good estimate of the sub-area sound power. For that reason it is natural to use the term “Sound Intensity Scaling” for the method. The scaling is defined
for a single omni-directional point source in such a way that area integration of the
peak created by the mainlobe equals the known radiated power from the point
source. So by this definition the total power will be within the mainlobe radius
from the source position, and integration over a larger area will cause an overestimation of the sound power. One reason for choosing this definition is that only the
mainlobe has a form that depends only on the array diameter and not on all microphone positions. Other choices would be somewhat arbitrary, would require integration over a larger area to get the total power and would need the scaling factor
to depend on the particular set of microphone positions. But the influence of the
sidelobes on the power integration is a drawback – if the mainlobe is rather narrow
and sound power integration is performed over an area much larger than the size
of the mainlobe on the source plane, then the level of sidelobes often present in
beamforming can contribute significantly to the power integration and cause a significant over-estimation of the sound power. The solution adopted to avoid this
significant over-estimation is to use only a finite dynamic range of sound intensity
in the area integration, typically around 10 dB. The applied dynamic range, however, should depend on the MSL of the array.
The scaling was derived for a single omni-directional point source on the array
axis. Beyond that we have assumed the monopole to be so far away from the array
that its sound field has the form of a plane wave across the array. Thus, we have
assumed the source to be in the far-field region relative to the array. The second
important assumption introduced in equations (A.13–A.14) above is that the mainlobe covers a relatively small solid angle. To investigate the effect of the last two
assumptions, a series of simulations have been performed with the 60-channel Sector Wheel Array of Fig. 4 and with a single monopole point source at different distances on the array axis and operating at different frequencies. The beamforming
calculation has been performed with two different beamformers:
1) A Delay-And-Sum beamformer focused at the finite source distance, but without any amplitude/distance compensation, [1].
2) The Cross-spectral beamformer with exclusion of Auto-spectra described in reference [1]. This method compensates for the amplitude variation across the array of the sound pressure from a monopole on the source plane.
The output has then been scaled as sound intensity through multiplication with the scaling factor α of equation (A.21), and finally the sound power has been estimated by integration over a circular area with radius equal to R_1 (refer to equation (A.15)) around the array axis.
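The procedure just described can be sketched numerically. The block below is a simplified re-creation, not the actual simulation of the article: it assumes 60 microphones at uniformly random positions on a 1 m disc (not the Sector Wheel Array), a monopole at 3 m and 2 kHz, an uncompensated Delay-And-Sum beamformer, intensity scaling with α, and power integration over the mainlobe disc:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 60 microphones uniformly random on a 1 m diameter disc,
# monopole on the array axis at L = 3 m, f = 2 kHz, air with rho*c = 413.
c, f, L, D, rho_c = 343.0, 2000.0, 3.0, 1.0, 413.0
k = 2.0 * np.pi * f / c

r = 0.5 * D * np.sqrt(rng.random(60))        # sqrt for uniform areal density
a = 2.0 * np.pi * rng.random(60)
mics = np.column_stack([r * np.cos(a), r * np.sin(a), np.zeros(60)])

src = np.array([0.0, 0.0, L])
dm = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * k * dm) / dm                # monopole pressure at the microphones

def dsb(focus):
    """Delay-And-Sum focused at a point, no amplitude compensation (method 1)."""
    d = np.linalg.norm(mics - focus, axis=1)
    return np.mean(p * np.exp(1j * k * d))

# Intensity scaling (A.21) and mainlobe radius (A.15)
xi1, Gamma = 3.8317, 0.4189
R1 = 2.0 * xi1 * L / (k * D)
alpha = (k * D) ** 2 / (32.0 * Gamma * rho_c)

# Numerical power integration over the mainlobe disc on the source plane
R_grid = np.linspace(0.0, R1, 40)[1:]
phis = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
dR, dphi = R_grid[1] - R_grid[0], phis[1] - phis[0]
P_est = 0.0
for R in R_grid:
    for ph in phis:
        focus = np.array([R * np.cos(ph), R * np.sin(ph), L])
        P_est += alpha * abs(dsb(focus)) ** 2 * R * dR * dphi

P0 = 1.0 / L                                 # plane-wave amplitude at the array
P_half = np.pi * L ** 2 * P0 ** 2 / rho_c    # half of Pa from eq. (A.10)
err_db = 10.0 * np.log10(P_est / P_half)
print(round(err_db, 2))                      # power estimation error in dB
```

With these assumptions the error comes out well below 1 dB, in line with the order of magnitude reported in Fig. A2 for a 3 m distance at 2 kHz.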
Fig. A2. Difference in decibels between estimated and true Sound Power. The estimated value is from an Intensity scaled Delay-And-Sum Beamformer. The source is a monopole on the array axis. [Figure: Power Error (dB), −0.2 to 0.8, versus Distance (m), 0 to 5, with curves for 1000, 1500, 2000, 4000 and 8000 Hz]

Fig. A2 shows the ratio between the estimated and the true sound power in decibels for the case of the Delay-And-Sum beamformer. At 1000 Hz the mainlobe
(and therefore the hot spot generated around the source position on the array axis)
covers an angle of approximately 24° from the array axis. This will introduce a significant error in equations (A.13–A.14) and therefore an error in the estimated
sound power, even when the source distance is relatively large. Fortunately,
SONAH applies below 1200 Hz (approximately) for the particular array, so beamforming will be used typically only above that frequency, and here the error is quite
small, provided the measurement distance is not too small. The error increases
quickly for distances smaller than approximately 1 m, which is the approximate
diameter of the array. Here, the assumption of the source being in the far-field
region relative to the array certainly does not hold. But fortunately the error does
not get worse than 0.6 dB (approximately) for distances down to half the array
diameter. To achieve the best possible resolution it is desirable to use the array at
measurement distances as small as this.
Fig. A3 shows the difference between the estimated and the true sound power in decibels for the case of the Cross-spectral beamformer with exclusion of Auto-spectra.

Fig. A3. Difference in decibels between estimated and true Sound Power. The estimated value is from an Intensity scaled Cross-spectral Beamformer with exclusion of Auto-spectra. The source is a monopole on the array axis. [Figure: Power Error (dB), −0.2 to 0.8, versus Distance (m), 0 to 5, with curves for 1000, 1500, 2000, 4000 and 8000 Hz]
This algorithm is implemented in Brüel & Kjær’s Stationary and Quasi-stationary Beamforming calculation software, and therefore it has been used for the measurements presented in this article. Comparison of Fig. A2 and Fig. A3 shows that in general the Cross-spectral algorithm produces smaller errors than the Delay-And-Sum algorithm, except at the very short measurement distance of 0.5 m.
It is, of course, also important to consider how the sound intensity scaling works for more realistic source distributions than a single monopole. Consider first the case of several omni-directional, but mutually incoherent point sources in the source plane. The incoherent sources will contribute independently to the Cross-spectral matrix between the microphones, i.e., the matrix will be the sum of elementary matrices related to each one of the point sources. If a Cross-spectral Beamformer is used, then the (power) output will equal the sum of contributions from the elementary matrices, meaning that the incoherent partial sources contribute additively to the Beamformer (power) output. Since they also contribute additively to the Sound Power, the conclusion is that the intensity scaling will hold for a set of incoherent monopole point sources.
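The additivity argument can be illustrated with a small numerical check. The pressure vectors and the steering vector below are arbitrary stand-ins, not a modelled sound field:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8                                    # assumed number of microphones

# Pressure vectors that two mutually incoherent monopoles would produce
p1 = rng.standard_normal(M) + 1j * rng.standard_normal(M)
p2 = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Incoherent sources add at the level of the cross-spectral matrix
G = np.outer(p1, p1.conj()) + np.outer(p2, p2.conj())

# A cross-spectral beamformer power output w' G w is then exactly the sum of
# the power outputs that each source would give alone
w = np.exp(1j * rng.uniform(0, 2 * np.pi, M)) / M   # arbitrary steering vector
total = (w.conj() @ G @ w).real
separate = abs(w.conj() @ p1) ** 2 + abs(w.conj() @ p2) ** 2
print(np.isclose(total, separate))       # -> True
```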
When there is full or partial coherence between a set of monopole sources, the
radiation from the total set of sources is no longer omni-directional, which will
introduce an error that cannot be compensated: The array covers only a certain
part of the 2π solid angle for which the sound power is desired. For angles not covered by the array we do not know the radiation and therefore we cannot know the
sound power.
References
[1] Christensen J.J., Hald J., “Beamforming”, Brüel & Kjær Technical Review, No. 1 (2004).
[2] Williams E.G., “Fourier Acoustics – Sound Radiation and Nearfield Acoustical Holography”, Academic Press (1999).
[3] Hald J., “Patch Near-field Acoustical Holography using a New Statistically Optimal Method”, Proceedings of Inter-noise 2003.
[4] Hald J., Christensen J.J., “A Class of Optimal Broadband Phased Array Geometries designed for Easy Construction”, Proceedings of Inter-noise 2002.
[5] Nordborg A., Wedemann J., Willenbrink L., “Optimum Array Microphone Configuration”, Proceedings of Inter-noise 2000.
[6] Boeringer D.W., “Phased Array including a Logarithmic Lattice of Uniformly Spaced Radiating and Receiving Elements”, United States Patent US 6,433,754 B1.
[7] Unser M., “Sampling – 50 Years After Shannon”, Proceedings of the IEEE, Vol. 88, No. 4, April 2000.
[8] Oerlemans S., Sijtsma P., “Determination of Absolute Levels from Phased Array Measurements Using Spatial Source Coherence”, AIAA 2002-2464.
[9] Johnson D.H., Dudgeon D.E., “Array Signal Processing: Concepts and Techniques”, Prentice Hall, New Jersey (1993).
Patch Near–field Acoustical Holography
Using a New Statistically Optimal Method
J. Hald
Abstract
The spatial FFT processing used in Near-field Acoustical Holography (NAH)
makes the method computationally efficient, but it introduces severe spatial windowing effects, unless the measurement area is significantly larger than the source.
A Statistically Optimal NAH (SONAH) method is introduced which performs the
plane-to-plane calculations directly in the spatial domain. Therefore, the need for a
representation in the spatial frequency domain and for zero padding is avoided,
significantly reducing the spatial windowing effects. This article describes the
SONAH algorithm and presents some results from numerical simulations and
practical measurements.
Résumé
The spatial FFT processing used in Near-field Acoustical Holography (NAH) makes the method computationally efficient, but it is accompanied by unwanted windowing effects unless the measured surface is significantly larger than the source. The method based on the Statistically Optimal NAH (SONAH) algorithm is presented here. Since the plane-to-plane calculations are performed directly in the spatial domain, the need for a representation in the spatial frequency domain and for zero padding is avoided, significantly reducing spatial windowing effects. This article describes the SONAH algorithm and presents several results obtained by numerical simulation and by practical measurement.
Zusammenfassung
The spatial FFT used in Near-field Acoustical Holography (NAH) makes the method computationally efficient, but it leads to considerable spatial windowing effects unless the measurement area is substantially larger than the source. A statistically optimised Near-field Acoustical Holography method (SONAH) is presented, which performs the plane-to-plane calculations directly in the spatial domain. This removes the need for a representation in the spatial frequency domain and for zero padding, so that spatial windowing effects are substantially reduced. This article describes the SONAH algorithm and presents results of numerical simulations and practical measurements.
Introduction
A plane-to-plane propagation of a sound field away from the source can be
described mathematically as a 2D spatial convolution with a propagation kernel. A
2D spatial Fourier transform reduces this convolution to a simple multiplication by
a transfer function. In Near-field Acoustical Holography (NAH) the Fourier transform is implemented as a spatial FFT of the pressure data measured over a finite
area.
The use of spatial FFT and multiplication with a transfer function in the spatial
frequency domain is computationally very efficient, but it introduces some errors.
The discrete representation in the spatial frequency domain introduces periodic
replica in the spatial domain, causing “wrap-around errors” in the calculation
plane. A standard way of spacing the replica away from the real measurement area
is to use zero padding, introducing, however, a sharp spatial window. Such a window causes spectral leakage in the spatial frequency domain [1]. As a consequence,
the measurement area must be significantly larger than the source to avoid very disturbing window effects. This is a problem, for example, in connection with Time
Domain NAH, [2], and Real-time NAH, which do not allow the synthesis of a large
measurement area through scanning. The new Statistically Optimal NAH
(SONAH) method performs the plane-to-plane transformation directly in the spatial domain rather than going via the spatial frequency domain, [3].
Theory of SONAH
The derivation of the SONAH algorithm given in this section is an extension of
the derivation given in reference [4]. It is different from, and probably simpler
than, the one given in reference [3].
We consider a complex, time-harmonic sound pressure field p(r) = p(x, y, z) with frequency f and wave number k = ω/c = 2π/λ, where ω = 2πf is the angular frequency, c is the propagation speed of sound and λ is the wavelength. For the following description we shall assume that the half space z ≥ −d is source free and homogeneous, i.e., the sources of the sound field are confined to z < −d, as shown in Fig. 1. The array measurements are performed in the plane z = 0.
Fig. 1. Geometry. [Figure: the measurement plane at z = 0 and the source region below the plane z = −d]
From the theory of NAH, [1], for example, it is well-known that the sound field
for z ≥ –d can be written as an infinite sum of plane propagating and plane evanescent waves:
p(r) = (1/(2π)²) ∫_{−∞}^{∞} ∫_{−∞}^{∞} P(K) Φ_K(r) dK    (1)
Here, K ≡ (kx , ky) is a wave number vector, P(K) is the Plane Wave Spectrum, and:
Φ_K(x, y, z) ≡ e^{−j(k_x x + k_y y + k_z(z + d))}    (2)
are plane propagating and plane evanescent wave functions.
The z-component k_z of the 3D wave number vector is the following function of K:

k_z = k_z(K) ≡ √(k² − |K|²)   for |K| ≤ k;   k_z = k_z(K) ≡ −j √(|K|² − k²)   for |K| > k    (3)
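Equation (3) translates directly into code. The sketch below is a minimal implementation; the 10 cm wavelength is an assumed example value:

```python
import numpy as np

def kz(K, k):
    """z-component of the wavenumber vector, eq. (3).

    Real (propagating) for |K| <= k, negative imaginary (evanescent) for |K| > k.
    """
    K = np.asarray(K, dtype=float)
    out = np.empty(K.shape, dtype=complex)
    prop = K <= k
    out[prop] = np.sqrt(k ** 2 - K[prop] ** 2)
    out[~prop] = -1j * np.sqrt(K[~prop] ** 2 - k ** 2)
    return out

k = 2.0 * np.pi / 0.1                     # assumed 10 cm wavelength
print(kz(np.array([0.0, 2.0 * k]), k))    # -> [k, -j*sqrt(3)*k]
```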
Notice that the elementary wave functions Φ_K all have amplitude equal to one on the source plane z = −d. The evanescent wave functions outside the Radiation Circle, i.e., for |K| > k, decay exponentially away from the source.
Since equation (1) has the form of an inverse spatial Fourier transform, the Plane
Wave Spectrum P is a representation of the sound field in the spatial frequency
domain.
We assume that the complex sound pressure p(r_n) has been measured at N positions r_n ≡ (x_n, y_n, 0) on the measurement plane. We wish to estimate the pressure p(r) at an arbitrary position r ≡ (x, y, z) in the source free region z ≥ −d, and we wish to estimate p(r) as a linear combination of the measured sound pressure data p(r_n):

p(r) ≈ Σ_{n=1}^{N} c_n(r) · p(r_n)    (4)
In order that equation (4) can provide good estimates for all sound fields with
sources for z ≤ –d, it must in particular provide good estimates for the elementary
plane wave functions Φ K . If, on the other hand, equation (4) provides good estimates for all Φ K , then it provides good estimates for any sound field with sources
for z ≤ –d.
We therefore require formula (4) to provide good estimation for a finite sub-set
of these elementary wave functions:
Φ_{K_m}(r) ≈ Σ_{n=1}^{N} c_n(r) · Φ_{K_m}(r_n),   m = 1…M    (5)
Solution of this set of linear equations in a ‘least squares’ sense, for the coefficients cn, means that we obtain the estimator (4) that is optimal for sound fields
containing only the chosen function sub-set, and with approximately equal content
of each function, i.e., with equal content of a set of spatial frequencies. Since all
functions have amplitude equal to one on the source plane, the estimator is optimised for Plane Wave Spectra P, which are “white” in the source plane.
To solve (5) in a least squares sense we arrange the quantities in matrices and
vectors:
A ≡ [Φ_{K_m}(r_n)],   α(r) ≡ [Φ_{K_m}(r)],   c(r) ≡ [c_n(r)]    (6)
This allows (5) to be written as follows:

α(r) ≈ A c(r)    (7)

The regularised least squares solution to (7) is:

c(r) = (A†A + θ²I)^{−1} A† α(r)    (8)
where A† is the conjugate transpose of A, I is a unit diagonal matrix and θ is a regularisation parameter. We now let the number M of elementary wave functions used to determine the estimation coefficients increase towards infinity, and we let the distribution of these wave functions in the K domain approach a continuous distribution:
[A†A]_{nn'} = Σ_m Φ*_{K_m}(r_n) Φ_{K_m}(r_{n'}) → (1/(πk²)) ∫∫ Φ*_K(r_n) Φ_K(r_{n'}) dK   as M → ∞    (9)

[A†α]_n = Σ_m Φ*_{K_m}(r_n) Φ_{K_m}(r) → (1/(πk²)) ∫∫ Φ*_K(r_n) Φ_K(r) dK   as M → ∞    (10)
Here, * represents complex conjugation and the integration is over the 2D plane wave spectrum domain. Notice that the switch in equations (9) and (10) to the integral representation introduces an identical re-scaling of the matrices A†A and A†α. This implies a re-scaling of the regularisation parameter θ of equation (8). The matrix A†A can be seen as an Auto-correlation matrix for the set of measurement positions, while A†α can be seen as containing cross correlations between the measurement points and the calculation position.
The integrals in equations (9) and (10) can be reduced analytically by conversion of K to polar co-ordinates: K = (k_x, k_y) = (K cos(ψ), K sin(ψ)). We introduce the xy-position vector R ≡ (x, y) and let R_n be the xy-component of r_n. From equations (9) and (2) we get:
[A†A]_{nn'} = (1/(πk²)) ∫∫ Φ*_K(r_n) Φ_K(r_{n'}) dK = (1/(πk²)) ∫∫ e^{j(k*_z − k_z)d} e^{jK·(R_n − R_{n'})} dK    (11)
and further, by polar angle integration and use of (3):

[A†A]_{nn'} = (2/k²) ∫_0^{∞} e^{j(k*_z − k_z)d} J_0(K R_{nn'}) K dK
  = 2k^{−2} ∫_0^{k} J_0(K R_{nn'}) K dK + 2k^{−2} ∫_k^{∞} e^{−2√(K² − k²) d} J_0(K R_{nn'}) K dK
  = 2 J_1(k R_{nn'})/(k R_{nn'}) + 2k^{−2} ∫_k^{∞} e^{−2√(K² − k²) d} J_0(K R_{nn'}) K dK    (12)

where R_{nn'} ≡ |R_n − R_{n'}|. Equation (10) can be treated in a similar way.
Clearly, all diagonal elements of the Auto-correlation matrix A†A are identical, because R_{nn} = 0 for all n, and the value can be shown to be:

[A†A]_{nn} = 1 + 1/(2(kd)²)    (13)
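Equation (13) can be verified by numerically integrating the evanescent tail of equation (12) with R_{nn'} = 0 (where the J_0 factor is 1 and the propagating part integrates to 1). The values of k and d below are assumed example values:

```python
import numpy as np
from scipy.integrate import quad

# Check of eq. (13): with R_nn' = 0,
#   [A'A]_nn = 1 + (2/k^2) * integral_k^inf exp(-2*sqrt(K^2 - k^2)*d) K dK
#            = 1 + 1/(2*(k*d)^2)
k, d = 2.0 * np.pi / 0.1, 0.03            # assumed 10 cm wavelength, 3 cm depth
tail, _ = quad(lambda K: np.exp(-2.0 * np.sqrt(K * K - k * k) * d) * K,
               k, np.inf)
diag = 1.0 + 2.0 * tail / k ** 2
print(np.isclose(diag, 1.0 + 1.0 / (2.0 * (k * d) ** 2)))   # -> True
```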
To solve for the vector c of prediction coefficients in equation (8), we need to choose the regularisation parameter θ. It is shown in reference [3] that under some assumptions the optimal value is given by:

θ² = (1 + 1/(2(kd)²)) · 10^{−SNR/10}    (14)

where SNR is the effective Signal-to-Noise Ratio in decibels for the microphone signals, taking into account all sources of error.
We can now estimate the pressure at the position r through use of equation (4):

p(r) ≈ Σ_{n=1}^{N} c_n(r) · p(r_n) = p^T c(r) = p^T (A†A + θ²I)^{−1} A† α(r)    (15)

Here, p is a vector containing the measured pressure signals, and we have used equation (8). Notice that the vector p^T(A†A + θ²I)^{−1} of de-correlated pressure data over the microphone positions needs to be calculated only once. After that it can be used for calculation of the pressure at many other positions r, using other cross correlation vectors A†α(r).
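The complete estimation chain of equations (5)–(8) and (15) can be sketched with a finite set of sampled wave functions in place of the analytic correlation integrals. Everything below is an assumed toy setup (8 × 8 grid with 3 cm spacing, 10 cm wavelength, a heuristic discrete analogue of the regularisation of equation (14) at 40 dB SNR), not the product implementation:

```python
import numpy as np

lam = 0.10                                # assumed 10 cm wavelength
k = 2.0 * np.pi / lam
d = 0.03                                  # assumed source-plane depth

# 8 x 8 microphone grid, 3 cm spacing, in the plane z = 0
xs = np.arange(8) * 0.03
X, Y = np.meshgrid(xs, xs)
mics = np.column_stack([X.ravel(), Y.ravel(), np.zeros(X.size)])

def phi(kx, ky, r):
    """Elementary wave function of eq. (2) at positions r (..., 3)."""
    K = np.hypot(kx, ky)
    kzv = np.sqrt(k**2 - K**2 + 0j) if K <= k else -1j * np.sqrt(K**2 - k**2)
    return np.exp(-1j * (kx * r[..., 0] + ky * r[..., 1] + kzv * (r[..., 2] + d)))

# Wavenumber samples covering the radiation circle plus some evanescent waves
kxy = np.linspace(-1.5 * k, 1.5 * k, 15)
waves = [(kx, ky) for kx in kxy for ky in kxy]
A = np.array([phi(kx, ky, mics) for kx, ky in waves])      # M x N, eq. (6)

# Regularised normal equations, eq. (8); theta^2 is a heuristic discrete
# analogue of eq. (14) for an assumed 40 dB effective SNR
AhA = A.conj().T @ A
theta2 = AhA.diagonal().real.mean() * 10.0 ** (-40.0 / 10.0)
H = np.linalg.solve(AhA + theta2 * np.eye(len(mics)), A.conj().T)

# Estimate an oblique propagating plane wave at a point between microphones
kx_t, ky_t = 0.3 * k, 0.2 * k             # test wave, not in the sampled set
p_mics = phi(kx_t, ky_t, mics)            # "measured" pressures
r = np.array([0.105, 0.105, 0.0])
c = H @ np.array([phi(kx, ky, r) for kx, ky in waves])     # c(r), eq. (8)
p_est = p_mics @ c                                          # eq. (15)
print(abs(p_est - phi(kx_t, ky_t, r)))    # small reconstruction error
```

As in equation (15), the de-correlation step `np.linalg.solve(...)` depends only on the geometry and could be reused for many calculation points r.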
The particle velocity can be obtained in the same way as a linear combination of the measured pressure signals. To derive the required estimation coefficients we start with an equation equivalent to (5), but with the particle velocity of the elementary wave functions on the left-hand side. As a result, we obtain the following expression for the particle velocity:

u_z(r) ≈ p^T (A†A + θ²I)^{−1} A† β(r)    (16)

where A†β is a vector of correlations between the pressure at the microphone positions and the particle velocity at the calculation position. Notice that the vector p^T(A†A + θ²I)^{−1} of de-correlated measured pressure data from equation (15) applies also in equation (16).
Based on the sound pressure and the particle velocity, the sound intensity can be
calculated.
Numerical Simulations
A set of measurements was simulated with the set-up illustrated in Fig. 2.
The grid represents an 8 × 8 element microphone array with 3 cm grid spacing,
the microphones being at the corners of the grid. Two coherent in-phase monopole
point sources of equal strength are positioned 6 cm below the array, i.e., at a distance that is twice the grid spacing. The positions of the point sources are indicated
in Fig. 2 by black dots. Clearly, the array does not cover the entire source area, so
NAH will introduce severe spatial window effects. SONAH calculations were performed in the measurement plane (z = 0) and in a plane halfway between the source plane and the measurement plane (z = −3 cm). The calculation grid had the same
geometry as the measurement grid. The regularisation parameter in equation (8)
was set according to an SNR equal to 40 dB, and the source distance d was set to
6 cm.
Fig. 2. Microphone grid and point sources. The grid spacing is 3 cm and the two coherent point sources are 6 cm below the array. The left source is 6 cm to the left of the array. [Figure]

First, the accuracy of the particle velocity estimation was investigated. For this, the central and the peripheral sections of the calculation area were considered separately, the peripheral section covering the 28 grid positions along the edges and the central section covering the rest. For each section/area the relative average error level was calculated from the formula:
L_err = 10 · log_10( Σ |u_i − u_i^{true}|² / Σ |u_i^{true}|² )    (17)
where the summations are both over the relevant section. A consequence of this definition is that a section with a low level of particle velocity will more easily exhibit a high relative error level. Fig. 3 shows the relative error levels in the measurement plane for the central area, for the edge and for the total area. For the central area the average relative error is seen to be lower than −18 dB over the entire calculated frequency range from 500 Hz to 5 kHz.
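Equation (17) is straightforward to implement. The sketch below also shows that a uniform 10 % amplitude error over a section gives exactly −20 dB; the sample velocity values are arbitrary:

```python
import numpy as np

def error_level(u_est, u_true):
    """Relative average error level of eq. (17), in dB."""
    num = np.sum(np.abs(u_est - u_true) ** 2)
    den = np.sum(np.abs(u_true) ** 2)
    return 10.0 * np.log10(num / den)

# A uniform 10 % amplitude error gives |0.1*u|^2 / |u|^2 = 0.01, i.e., -20 dB
u_true = np.array([1.0 + 0.5j, -0.3 + 2.0j, 0.8 - 0.2j])
print(round(error_level(1.1 * u_true, u_true), 1))        # -> -20.0
```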
Fig. 4 shows the corresponding data for the calculation plane at z = –3 cm.
Clearly, the error level has increased significantly, in particular along the edges,
where the true particle velocity (z-component) is lower than before. The average
error level over the central section/area is still better than around –18 dB up to
3.3 kHz and better than –12 dB up to 5 kHz.
Fig. 3. Relative average error level for SONAH calculation of particle velocity in the measurement plane, z = 0. [Figure: Relative Error (dB) versus Frequency (Hz), 500–5000 Hz, with curves for Edge, Center and Full area]

Fig. 4. Relative average error level for SONAH calculation of particle velocity in the calculation plane, z = −3 cm. [Figure: Relative Error (dB) versus Frequency (Hz), 500–5000 Hz, with curves for Edge, Center and Full area]

Fig. 5 shows the Sound Power for the central and full sections of the calculation area at z = −3 cm. For both sections the sound power error is less than 0.2 dB up to around 3.5 kHz. Above that frequency the estimated power slowly decreases, probably because the number of microphones is too small to uniquely determine general sound fields in that frequency range.
Fig. 5. True and estimated sound power for the central and the full sections of the calculation area at z = −3 cm. [Figure: Sound power (dB), 36–41, versus Frequency (Hz), 0–5000, with curves for Full area (True), Full area, Central area (True) and Central area]

Practical Measurement
The SONAH calculation method has been implemented in Brüel & Kjær’s Non-stationary STSF software package, which is an implementation of Time Domain acoustical holography. Here, time histories of sound pressure must be measured simultaneously at all measurement positions. The holography calculations are performed through an FFT transform of the full time-section to be studied to the frequency domain, followed by NAH or SONAH calculation for each FFT line and finally an inverse FFT transform back to the time domain, [2]. In order to reduce the calculation time for the SONAH calculations, matrix interpolation was performed along the frequency axis on the correlation matrices A†A, A†α and A†β. With a few further efforts to reduce computation time, the SONAH calculations took only a few times longer than traditional NAH (based on spatial FFT processing) for the applied 120-element array.
The example to be presented here was a measurement on the side of the steel
track of a large Caterpillar track-type tractor. The main sources of sound radiation
were around the areas where the track passes over the sprocket and around the rear
and front idlers. We took a measurement with a 10 cm spaced 10 × 12 element array
positioned over a small Carrier Roller with a relatively low level of noise radiation.
Fig. 6 shows a picture of the measurement area and plots of the A-weighted, time-averaged (RMS) particle velocity maps for the frequency band 205–1454 Hz (1/12-octave bands). Clearly, SONAH has a much better ability to suppress spatial window effects than the traditional NAH technique.
Fig. 6. Averaged Particle Velocity maps for the 1/12-octave bands 205–1454 Hz, A-weighted. [Figure: photograph of the measurement area, NAH calculation map, SONAH calculation map]
Conclusions
The new Statistically Optimal NAH (SONAH) method has been introduced. This
method performs the plane-to-plane transformation directly in the spatial domain, avoiding the use of spatial FFT. Careful numerical programming ensures calculation times only slightly longer than FFT-based NAH. Numerical simulations and practical results demonstrate that SONAH makes it possible to perform acoustical holography measurements with an array that is smaller than the source, while still keeping errors at an acceptable level.
References
[1] Maynard J.D., Williams E.G., Lee Y., “Near-field Acoustical Holography: I. Theory of Generalized Holography and the Development of NAH”, J. Acoust. Soc. Am. 78 (4), 1395–1413, October 1985.
[2] Hald J., “Time Domain Acoustical Holography and Its Applications”, Sound & Vibration, 16–25, February 2001.
[3] Steiner R., Hald J., “Near-field Acoustical Holography Without the Errors and Limitations Caused by the Use of Spatial DFT”, International Journal of Acoustics and Vibration, 6 (2), June 2001.
[4] Hald J., “Planar Near-field Acoustical Holography with Arrays Smaller Than the Sound Source”, Proceedings of ICA 2001.
Previously issued numbers of
Brüel & Kjær Technical Review
(Continued from cover page 2)
1 – 1989 STSF — A Unique Technique for Scan Based Near-Field Acoustic
Holography Without Restrictions on Coherence
2 – 1988 Quantifying Draught Risk
1 – 1988 Using Experimental Modal Analysis to Simulate Structural Dynamic
Modifications
Use of Operational Deflection Shapes for Noise Control of Discrete Tones
4 – 1987 Windows to FFT Analysis (Part II)
Acoustic Calibrator for Intensity Measurement Systems
3 – 1987 Windows to FFT Analysis (Part I)
2 – 1987 Recent Developments in Accelerometer Design
Trends in Accelerometer Calibration
1 – 1987 Vibration Monitoring of Machines
4 – 1986 Field Measurements of Sound Insulation with a Battery-Operated Intensity
Analyzer
Pressure Microphones for Intensity Measurements with Significantly
Improved Phase Properties
Measurement of Acoustical Distance between Intensity Probe Microphones
Wind and Turbulence Noise of Turbulence Screen, Nose Cone and Sound
Intensity Probe with Wind Screen
3 – 1986 A Method of Determining the Modal Frequencies of Structures with
Coupled Modes
Improvement to Monoreference Modal Data by Adding an Oblique Degree
of Freedom for the Reference
2 – 1986 Quality in Spectral Match of Photometric Transducers
Guide to Lighting of Urban Areas
1 – 1986 Environmental Noise Measurements
Special technical literature
Brüel & Kjær publishes a variety of technical literature which can be obtained from
your local Brüel & Kjær representative.
The following literature is presently available:
•
•
Catalogues (several languages)
Product Data Sheets (English, German, French,)
Furthermore, back copies of the Technical Review can be supplied as listed above.
Older issues may be obtained provided they are still in stock.
HEADQUARTERS: DK-2850 Nærum · Denmark
Telephone: +45 45 80 05 00 · Fax: +45 45 80 14 05
www.bksv.com · [email protected]