Search for Gamma-ray Lines
from Dark Matter with the
Fermi Large Area Telescope

Tomi Ylinen

Doctoral Thesis in Physics
Particle and Astroparticle Physics, Department of Physics,
Royal Institute of Technology, SE-106 91 Stockholm, Sweden
Stockholm, Sweden 2010
Cover illustration: The gamma-ray sky as seen by the Fermi Large Area Telescope
after one year of observations.
Academic thesis which, with the permission of the Royal Institute of Technology in Stockholm, is submitted for public examination for the degree of Doctor of Technology on Monday 7 June 2010, at 14.00, in hall FA32, AlbaNova University Centre,
Roslagstullsbacken 21, Stockholm.
The thesis will be defended in English.
ISBN 978-91-7415-672-0
TRITA-FYS 2010:28
ISSN 0280-316X
ISRN KTH/FYS/--10:28--SE
© Tomi Ylinen, May 2010
Printed by Universitetsservice US-AB 2010
Abstract
Dark matter (DM) constitutes one of the most intriguing but so far unresolved
issues in physics. In many extensions of the Standard Model of particle physics,
the existence of a stable Weakly Interacting Massive Particle (WIMP) is predicted.
The WIMP is an excellent DM particle candidate. One of the most interesting
scenarios is the creation of monochromatic gamma-rays from the annihilation or
decay of these particles. This type of signal would represent a “smoking gun” for
DM, since no other known astrophysical process should be able to produce it.
In this thesis, the search for spectral lines with the Large Area Telescope (LAT)
onboard the Fermi Gamma-ray Space Telescope (Fermi) is presented. The satellite
was successfully launched from Cape Canaveral in Florida, USA, on 11 June, 2008.
The energy resolution and performance of the detector are both key factors in the
search and are investigated here using beam test data, taken at CERN in 2006
with a scaled-down version of the Fermi-LAT instrument. A variety of statistical
methods, based on both hypothesis tests and confidence interval calculations, are
then reviewed and tested in terms of their statistical power and coverage.
A selection of the statistical methods is further developed into peak-finding
algorithms and applied to a simulated data set called obssim2, which corresponds to
one year of observations with the Fermi-LAT instrument, and to almost one year of
Fermi-LAT data in the energy range 20–300 GeV. The analysis of the Fermi-LAT data
yielded no detection of spectral lines, so limits are placed on the velocity-averaged
cross-section, ⟨σv⟩γX, and the decay lifetime, τγX, and theoretical implications are
discussed.
Contents

Abstract  iii
Contents  v

Introduction  3

1 Particle interactions  7
  1.1 Charged particles  7
  1.2 Photons  9
  1.3 Electromagnetic showers  9

2 Gamma-ray astronomy  13
  2.1 Gamma-ray production  13
      2.1.1 Thermal gamma-rays  13
      2.1.2 Non-thermal gamma-rays  14
  2.2 Gamma-ray sources  15
  2.3 Detection techniques  17
  2.4 History  18

3 Dark matter  25
  3.1 Evidence  25
  3.2 Dark matter candidates  28
  3.3 Dark matter properties  30
  3.4 Halo models  31
  3.5 Detection techniques  32
  3.6 Experimental status  33

4 Fermi Gamma-ray Space Telescope  35
  4.1 Scientific goals  36
  4.2 Large Area Telescope  37
      4.2.1 Tracker  38
      4.2.2 Calorimeter  39
      4.2.3 Anti-Coincidence Detector  40
      4.2.4 Event reconstruction  43
      4.2.5 On-orbit calibration  45
      4.2.6 Data structure  46
      4.2.7 Performance  47
  4.3 Gamma-ray Burst Monitor  49

5 Calibration Unit beam test  51
  5.1 Introduction  51
  5.2 Calibration Unit  52
  5.3 PS facility beam test  53
  5.4 SPS facility beam test  56

6 Beam test analysis  59
  6.1 Analysis approach  59
  6.2 Creating a clean sample  61
  6.3 Position reconstruction in the CAL  63
      6.3.1 Asymmetry curves  64
  6.4 Direction reconstruction in the CAL  70
  6.5 Energy reconstruction in the CAL  72
      6.5.1 Raw energy distributions  72
      6.5.2 Longitudinal profile  72
      6.5.3 Energy resolution  74
  6.6 Latest developments  85
  6.7 Summary and conclusions  85

7 Dark matter line search  91
  7.1 Initial discussions  91
      7.1.1 Region-of-interest selection  91
      7.1.2 Halo profile selection  93
      7.1.3 Data selection  93
  7.2 Statistical concepts  95
      7.2.1 Frequentist and Bayesian statistics  95
      7.2.2 Confidence intervals  96
      7.2.3 Hypothesis tests  96
      7.2.4 Coverage  97
      7.2.5 Power  97
      7.2.6 Significance  98
  7.3 Statistical methods  98
      7.3.1 Bayes factor method  98
      7.3.2 χ² method  99
      7.3.3 Feldman & Cousins  99
      7.3.4 Profile likelihood  100
      7.3.5 Method comparison  100
  7.4 Implementations for line search  102
      7.4.1 Binned ProFinder  102
      7.4.2 Unbinned ProFinder  104
      7.4.3 Scan Statistics  113
  7.5 Application on obssim2 data set  116
      7.5.1 Exposure  120
      7.5.2 Limits  122
  7.6 Application on Fermi-LAT data  124
      7.6.1 Exposure  128
      7.6.2 Limits  129
  7.7 Summary and conclusions  133

8 Discussion and outlook  135

Acknowledgements  137
List of figures  139
List of tables  143
Bibliography  145
To my grandfather,
who always saw the humour of the situation.
Introduction
Gamma-rays are photons in the highest-energy region of the electromagnetic
spectrum, with energies above 100 keV. They were discovered
in 1900 by Paul Villard and have since been studied extensively from Earth and
from space.
The fundamental processes that are known to give rise to gamma-rays include
high-energy charged particle interactions with radiation and magnetic fields (inverse
Compton, synchrotron radiation and bremsstrahlung) but also the decay of neutral
pions and the annihilation of an electron with a positron.
In space, gamma-rays are produced through the aforementioned mechanisms in
a large variety of astrophysical objects and since they are unaffected by magnetic
fields, they point directly towards the sources. In asteroids and other circumsolar objects, gamma-rays are produced when high-energy cosmic rays interact with
the rock and ice. Interactions that create gamma-rays also take place in the atmospheres of the Earth and the Sun. More distant gamma-rays are produced on
both galactic and extragalactic scales. On galactic scales, rapidly rotating and
highly magnetised neutron stars (pulsars), interstellar matter and remnants from
supernova explosions give rise to gamma-rays. Further out in space, a variety of active galaxies but also particularly explosive and rapid events, known as gamma-ray
bursts, are known to produce them.
The current understanding of the Universe, supported by a vast number of
observations, suggests that a large portion of its content is dark and invisible. Only
about 5% of the Universe is believed to consist of baryonic matter, whereas roughly
70% is referred to as dark energy and the remaining ∼25% is denoted as dark matter
and is believed to be composed of some form of exotic matter. A multitude of
theories have been constructed to describe the nature of dark matter and a popular
theory proposes that it consists of weakly interacting massive particles or WIMPs.
The WIMPs can, in many cases, annihilate or decay into known Standard Model
particles, including gamma-rays. A “smoking-gun” signal from this kind of dark
matter would be the observation of a spectral line, produced by the annihilation or
decay of WIMPs into two gamma-rays or one gamma-ray and some other particle.
Throughout history, a large number of experiments have studied the Universe
in gamma-rays and the latest addition is the space-based Fermi Gamma-ray Space
Telescope (Fermi) and its principal instrument, the Large Area Telescope (LAT).
The Fermi-LAT instrument has unprecedented sensitivity and performance
and consists of a precision tracker, which provides the direction of the incident
gamma-ray, a segmented calorimeter, constructed to measure the energy and an
anti-coincidence detector, used to reduce the contamination from charged particles.
The instrument is designed to measure gamma-rays with energies from 20 MeV to
more than 300 GeV.
In this thesis, the proposed spectral line signal is searched for using gamma-ray data from the Fermi Gamma-ray Space Telescope. The sensitivity of such a
search depends on the overall performance and understanding of the instrument,
which both rely heavily on the Geant4-based full-detector Monte Carlo simulation
developed by the Fermi-LAT Collaboration. For this reason, a scaled-down version
of the Fermi-LAT instrument, called the Calibration Unit, was tested at CERN
using beams of photons and charged particles. The analysis of the collected beam
test data, presented in this thesis, is focused on investigating the accuracy of the
Monte Carlo simulation in terms of directional and energy-related observables, due
to their particular importance to the spectral line search.
The spectral line search itself is a statistical analysis, where the contributions
from a signal and a background component of known shapes are calculated from
a simultaneous fit to the data. The properties of the statistical method also play
an important role in the search and can be tested by sets of random realisations
(typically called toy Monte Carlo experiments) of the assumed shapes of the components. The properties that have been investigated in this thesis for a selected set
of statistical methods are the statistical power and coverage.
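The coverage benchmark described above can be sketched with a small toy Monte Carlo. The interval construction below (a textbook Gaussian mean interval) and all parameter values are illustrative assumptions, not the methods or settings used in the thesis; it only shows the mechanics of counting how often random realisations produce an interval containing the true value.

```python
import random

def coverage(mu_true, sigma, n, n_toys=2000, z=1.645, seed=1):
    """Toy-Monte-Carlo estimate of the coverage of the textbook interval
    mean +/- z * sigma / sqrt(n) for Gaussian samples: the fraction of
    random realisations whose interval contains the true mean."""
    rng = random.Random(seed)
    half = z * sigma / n ** 0.5
    hits = 0
    for _ in range(n_toys):
        mean = sum(rng.gauss(mu_true, sigma) for _ in range(n)) / n
        if mean - half <= mu_true <= mean + half:
            hits += 1
    return hits / n_toys

cov = coverage(mu_true=5.0, sigma=2.0, n=20)  # close to the nominal 0.90
```

A method with correct coverage returns a fraction close to the nominal confidence level; over- or undercoverage shows up directly in this fraction.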
Outline of the thesis
The first chapters of this thesis contain the theoretical backgrounds relevant to
the performed analyses. In Chapter 1, the various interactions occurring when
particles traverse matter are reviewed. Chapter 2 gives a recapitulation of gamma-ray astronomy, including a historical overview, and Chapter 3 focuses on providing
a theoretical background to dark matter. In Chapter 4, the Fermi Gamma-ray
Space Telescope and its different subdetectors are explained in detail. Chapters 5
and 6 are devoted to the beam tests performed at CERN and the analysis of the
data obtained there, respectively. Chapter 7 is dedicated to the search for spectral
lines, both on a simulated data set that corresponds to one year of data with the
Fermi-LAT and on almost one year of measured data. The challenges involved in a
line search are also explained and results from benchmark tests, using the statistical
power and coverage, are shown for a number of different statistical methods. Finally,
in Chapter 8, conclusions from and discussions of the analyses are presented.
Author’s contribution
The beam test efforts were performed by a large number of people within the
Fermi-LAT Collaboration. Before the tests, the author helped in assessing the
data requirements for the planned setups. The author then actively participated
in the beam tests at CERN by assisting in the setup and disassembling of the
experiments, taking shifts, analysing and validating the quality of the data and
Monte Carlo simulations and by presenting the results in shift meetings.
In the following overall analysis of the beam test data, the author developed
an event selection and analysed the differences between data and Monte Carlo
simulations in terms of direction, position and energy measurements and frequently
presented the results in online beam-test meetings.
In the dark matter analysis, the author was fully responsible for translating
the statistical methods into tools for searching for dark matter lines. The software
used in the analyses, which utilises the ROOT and Science Tools frameworks, was
written by the author. The DarkSUSY simulation package was also adapted by the
author to calculate the line-of-sight integral over a specific region of the sky.
In the benchmark studies of the statistical power and coverage for different
statistical methods, the implementation and results from the frequentist methods
were produced by the author.
The development, implementation and execution of the spectral line search on
obssim2 data was done by the author. In the spectral line search on measured
Fermi-LAT data, the unbinned profile likelihood was implemented and tested by
the author. The author, furthermore, calculated the upper limits on the flux from
dark matter annihilations or decays, which were subsequently published in Physical
Review Letters.
Publications
The author has given direct and significant contributions to the following publications and proceedings:
• A.A. Abdo ... T. Ylinen, et al., Fermi Large Area Telescope Search for Photon
Lines from 30 to 200 GeV and Dark Matter Implications, Physical Review
Letters, 104 (2010) 091302, [arXiv:astro-ph/1001.4836].
• T. Ylinen, Y. Edmonds, E. D. Bloom & J. Conrad, Dark Matter annihilation
lines with the Fermi-LAT, Proceedings of the 31st ICRC, Łódź, 2009.
• T. Ylinen, Y. Edmonds, E. D. Bloom & J. Conrad, Detecting Dark Matter
annihilation lines with Fermi, Proceedings of “Identification of Dark Matter
2008”, Stockholm, Sweden, p.111, [arXiv:astro-ph/0812.2853].
• J. Conrad, J. Scargle & T. Ylinen, Statistical analysis of detection of, and
upper limits on, dark matter lines, AIP Conf. Proc., 921 (2007) 586.
6
Contents
• L. Baldini ... T. Ylinen, et al., Preliminary results of the LAT Calibration
Unit beam tests, AIP Conf. Proc., 921 (2007) 190.
The author is at the time of writing also co-author on another 59 Fermi-LAT
Collaboration papers and 2 PoGOLite Collaboration papers.
Chapter 1
Particle interactions
This chapter reviews some of the physical processes involved when the particles investigated in Chapter 6 interact with matter. For a more detailed review including
mathematical descriptions, see e.g. [1].
1.1 Charged particles
In Fig. 1.1, the stopping power for positive muons in copper is shown over nine
orders of magnitude in momentum. The plot is divided into different regions, where
different effects dominate the interactions taking place.
Figure 1.1. The average energy loss of positive muons in copper as a function of
the muon momentum (from [1]).
Below about 0.7 MeV, non-ionising nuclear recoil energy losses dominate the
total energy loss for e.g. protons. In the same region, Lindhard and Scharff have
described the stopping power as proportional to β = v/c [2]. For 1–5 MeV in
the second region, no satisfactory theory exists. For protons, however, there are
phenomenological fitting formulae developed by Andersen and Ziegler [3].
Above about 7 MeV, the so-called “Barkas effect” yields a stopping power that
is somewhat larger for negative particles than for positive particles with the same
mass and velocity [4]. Overall, however, the stopping power is well described by the
Bethe-Bloch equation and the particles lose their energy mainly through ionisation
and atomic excitation. Due to the muon spectrum at sea level, in which most of the
muons have an energy that is around the minimum of the Bethe-Bloch function,
muons are often referred to as minimum-ionising particles.
Radiative energy losses, composed of bremsstrahlung, e⁺e⁻ pair production and
photonuclear interactions, become dominant above roughly 70 GeV. In the figure,
Eµc represents the critical energy, at which ionisation and radiative losses are
equal.
In every ionisation event, one or more energetic electrons are typically knocked
out of atoms in the material. If the energy of an ejected electron is much larger than
the ionisation potential, it is called a delta electron or δ-ray. Delta electrons
with high energies are, however, very rare. For a particle with β ≈ 1, only one
collision where the kinetic energy of the delta electron is larger than 1 keV will on
average occur along a path of 90 cm in Ar gas [1].
For electrons and positrons, Fig. 1.2 shows the fractional energy loss per radiation length in lead as a function of the electron or positron energy. The low
energy part, below about 7 MeV in lead, is dominated by ionisation although other
smaller effects, namely Møller scattering, Bhabha scattering and e+ annihilation,
contribute. Above a few tens of MeV, bremsstrahlung is completely dominating in
most materials.
Two additional processes, which are less important for energy loss, are Cherenkov and transition radiation. Cherenkov radiation is produced when the velocity of
the particle is greater than the local phase velocity for light in the specific medium.
The emission is characterised by an angle, θc , relative to the direction of the particle, which depends on the velocity of the particle and the refractive index of the
medium. Transition radiation, on the other hand, is emitted when a charged particle crosses from one medium to another and the two media have different optical
properties.
An important process that occurs when charged particles traverse a medium
is called multiple Coulomb scattering. This broadens distributions for direction
measurements, because the charged particles are deflected by many small angle
scatters. Most of these deflections are Coulomb scatterings, and the distribution
of deflections is roughly Gaussian for small angles and with larger tails than a
Gaussian for larger angles.
Figure 1.2. The fractional energy loss per radiation length in lead as a function of
electron or positron energy (from [1]).
1.2 Photons
In Fig. 1.3, the cross-sections of the different processes involved in photon-matter
interactions are shown. The cross-sections depend on the material and the figure
is an example plot for photons interacting in lead.
At low energies (below about 500 keV for lead), the cross-section for the atomic
photoelectric effect, σp.e., dominates. In the photoelectric effect, a photon is
absorbed by an atom, followed by the emission of an electron. Another process
at low energies, which is not as probable as the photoelectric effect, is Rayleigh
scattering, σRayleigh, where a photon is scattered by an atom without ionising or
exciting the atom.
At higher energies, Compton scattering, σCompton, in which photons are scattered by electrons at rest, is the dominant process, but photonuclear interactions
such as the Giant Dipole Resonance, σg.d.r., where the target nucleus is broken up,
also contribute. In lead, this region ranges from about 500 keV to about 5 MeV.
At the high end of the energy range (above about 5 MeV in lead), pair production
in nuclear (κnuc) and electron (κe) fields completely dominates.
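The energy ordering just described for lead can be summarised in a small helper function. The ~0.5 MeV and ~5 MeV boundaries are the approximate region boundaries quoted in the text, not exact cross-over energies:

```python
def dominant_photon_process_in_lead(E_mev):
    """Which photon-interaction cross-section roughly dominates in lead,
    using the approximate boundaries from the text (~0.5 MeV, ~5 MeV)."""
    if E_mev < 0.5:
        return "photoelectric effect"   # sigma_p.e. dominates
    if E_mev < 5.0:
        return "Compton scattering"     # sigma_Compton dominates
    return "pair production"            # kappa_nuc + kappa_e dominate
```

For example, a 100 MeV photon in lead is classified as pair production, consistent with the shower development discussed in the next section.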
1.3 Electromagnetic showers
A high-energy electron or photon that interacts with a thick absorber gives rise
to a cascade of pair productions from photons and bremsstrahlung photons from
the pair-produced electrons and positrons. The longitudinal development of the
resulting electromagnetic shower, shown in Fig. 1.4, scales as the radiation length
in the absorber. When the energies of the electrons and positrons fall below the
Figure 1.3. The cross-sections of the photoelectric effect, Rayleigh- and Compton
scattering, pair production in nuclear and electron fields and photonuclear interactions as a function of photon energy in lead (from [1]).
critical energy, Ec , where the ionisation loss rate is equal to the bremsstrahlung loss
rate, additional shower particles are no longer produced and the energy dissipation
is then provided by ionisation and excitation.
Figure 1.4. A simulation of an electromagnetic shower from a 50 GeV photon
(from [5]).
Electromagnetic showers are often described by introducing the scale variables
t = x/X0 and y = E/Ec , in which case the longitudinal distance is measured in
units of radiation length, X0 , and the energy is described in units of the critical
energy. One radiation length (which depends on the atomic number Z) is defined
as a characteristic mean distance in which a high-energy electron loses all but 1/e
of its energy through bremsstrahlung and a high-energy photon propagates 7/9 of
the mean free path for pair production. With this notation, the mean longitudinal
profile of the energy deposition can be fitted reasonably well with a gamma function,
given in Eq. 1.1:
dE/dt = E0 b (bt)^(a−1) e^(−bt) / Γ(a)    (1.1)
According to EGS4 simulations, the maximum occurs at tmax = (a − 1)/b =
1.0 × (ln y + Cj), where j = e, γ; a and b are free parameters; and Ce = −0.5 for
electron-induced showers and Cγ = +0.5 for photon-induced showers [1].
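As an illustration, the gamma-function profile of Eq. 1.1 and the shower-maximum formula can be evaluated numerically. The value b = 0.5 and the critical energy Ec = 10 MeV used below are typical illustrative assumptions, not fitted values from this thesis:

```python
import math

def shower_profile(t, E0, a, b):
    """Mean longitudinal energy-deposition profile dE/dt of Eq. 1.1:
    a gamma distribution in the scaled depth t = x/X0, normalised so
    that the integral over t returns the incident energy E0."""
    return E0 * b * (b * t) ** (a - 1) * math.exp(-b * t) / math.gamma(a)

def t_max(E, Ec, photon=True):
    """Depth of shower maximum, t_max = (a - 1)/b = ln(E/Ec) + C_j,
    with C_gamma = +0.5 (photon-induced), C_e = -0.5 (electron-induced)."""
    return math.log(E / Ec) + (0.5 if photon else -0.5)

# A 50 GeV photon in a material with critical energy Ec = 10 MeV
# (illustrative value): the shower peaks ~9 radiation lengths deep.
tm = t_max(50e3, 10.0)
b = 0.5               # typical fitted value (assumption)
a = 1 + b * tm        # recover a from t_max = (a - 1)/b
peak = shower_profile(tm, 50e3, a, b)
```

By construction the profile then peaks exactly at tm, which is why calorimeters of a fixed depth measure a smaller fraction of the shower as the energy grows.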
Chapter 2
Gamma-ray astronomy
This chapter contains an introduction to gamma-ray astronomy. First, the different mechanisms by which cosmic gamma-rays are produced are reviewed. This is
followed by a short description of the gamma-ray emitting astrophysical sources
and the main techniques used to observe them. Finally, a historical overview of
gamma-ray astronomy is provided.
2.1 Gamma-ray production
Gamma-rays are generally defined as photons that have energies greater than about
100 keV. There are a number of different processes by which astronomical objects can
produce them, but the mechanisms are either thermal or non-thermal. A thorough
review of the different forms of production can be found in e.g. [6].
2.1.1 Thermal gamma-rays
A body with a temperature above absolute zero will emit thermal radiation.
If the body is a perfect absorber in thermal equilibrium with its environment at
temperature T , i.e. a black-body, the energy-dependent intensity of photons is
governed by the Planck formula in Eq. 2.1,
I(Eph) = [2 Eph³ / (hc)²] · [1 / (e^(Eph/kB T) − 1)],    (2.1)
where h and kB are the Planck and Boltzmann constants, respectively, and c is
the speed of light. The average energy of the photons is given by Eq. 2.2.
⟨Ethermal⟩ ≈ 2.3 × 10⁻¹⁰ (T/K) MeV    (2.2)
In order to get thermal photons at an average energy of 1 GeV, temperatures of
about 10¹³ K are needed. Such temperatures are only reached in the Big Bang. In
addition, that temperature level implies such a large photon density that the mean
free path for the photons is less than 1 cm. This leads to self-absorption by pair
production. Typical astrophysical gamma-ray sources are therefore non-thermal in
nature.
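The order-of-magnitude estimate above follows directly from Eq. 2.2; a minimal sketch (the 2.3 × 10⁻¹⁰ coefficient is the one quoted in the text):

```python
def mean_thermal_energy_mev(T_kelvin):
    """Average black-body photon energy from Eq. 2.2:
    <E_thermal> ~ 2.3e-10 * (T/K) MeV."""
    return 2.3e-10 * T_kelvin

def temperature_for_mean_energy(E_mev):
    """Invert Eq. 2.2: the temperature whose black-body spectrum has
    the given average photon energy."""
    return E_mev / 2.3e-10

# An average photon energy of 1 GeV (= 1000 MeV) requires T ~ 4e12 K,
# i.e. of the order of the 10^13 K quoted in the text.
T = temperature_for_mean_energy(1e3)
```

Running the same relation forward, a body at 10¹³ K has an average photon energy of about 2.3 GeV, confirming that thermal production of GeV gamma-rays requires Big Bang conditions.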
2.1.2 Non-thermal gamma-rays
For gamma-rays that are produced non-thermally, a distinction can be made between gamma-rays from particle-field interactions and gamma-rays from particle-matter interactions. The first category includes the following processes:
• Synchrotron radiation, which is created when relativistic charged particles
move in a magnetic field. The energy loss rate of an electron moving in a
helical path around a magnetic field B is then given by:
−(dEe/dt)syn = (2/3) c (e²/me c²)² B⊥² γ²,    (2.3)
where e is the electron charge, me is the electron mass, B⊥ = B sin θ where
θ is the pitch angle and γ = Ee /me c2 is the Lorentz factor.
• Curvature radiation, which occurs when the magnetic field that the charged
particle moves in is non-uniform and the curvature radius, Rc , of the magnetic
field line is small. The energy loss is then given by:
−(dEe/dt)curv = (2/3) (c e²/Rc²) γ⁴    (2.4)
• Inverse Compton (IC) interactions, which refer to the scattering of relativistic
electrons on soft photons, where the energy transfer to the photon gives the
photon an energy in the gamma-ray region. In the classical limit, the average
energy of the emerging photon is ⟨EIC,γ⟩ = (4/3) ⟨Eγ⟩ γ², where ⟨Eγ⟩ is the
average energy of the target photon. In the relativistic case, most of the
energy of the electron is transferred to the photon and EIC,γ ≈ Ee.
The second category with particle-matter interactions consists of:
• Relativistic bremsstrahlung, which is produced when relativistic electrons are
accelerated in the electrostatic field of a nucleus.
• Hadronic gamma-ray emission, where gamma-rays are produced via the decay
of neutral pions (π⁰), which have a proper lifetime of 9 × 10⁻¹⁷ s. The
neutral pions are created through a number of different channels of proton
and antiproton interactions.
• Electron-positron annihilations, in which gamma-rays are produced through
the reaction e+ + e− → γ + γ. If the electron and the positron are at rest,
the photons will have an energy equal to the rest mass of the electron, i.e.
0.511 MeV. If one of the leptons is moving at a high velocity, one of the
photons will have a high energy and the other photon will have an energy of
about 0.511 MeV.
• Dark matter annihilations/decays. Many extensions of the Standard Model
of particle physics predict the existence of dark matter particles, which self-annihilate or decay and produce either gamma-rays directly or indirectly
through the decay of the Standard Model particles produced in the process. In
many of these models, the dominant final states are quarks and gauge bosons
(W/Z), which through hadronisation create π⁰ particles that decay into two
gamma-rays. Also leptonic final state models (e.g. into µ+ µ− ) have been
suggested to fit recent cosmic-ray electron and positron measurements (see
also Section 3.6). The gamma-rays can then be produced via IC processes but
also via internal bremsstrahlung, where an additional photon is emitted in the
final state [7]. Many models also allow for direct channels into monoenergetic
gamma-rays. The possibility of gamma-rays from dark matter has, however,
not yet been experimentally verified. The subject of dark matter is covered
in more detail in Chapter 3.
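Two of the particle-field relations above (Eq. 2.3 and the classical-limit inverse-Compton relation) can be sketched numerically. The CGS constants and the example electron and photon energies below are illustrative assumptions, not values taken from the text:

```python
R_E = 2.818e-13   # classical electron radius e^2/(m_e c^2) [cm] (assumed value)
C_CM = 2.998e10   # speed of light [cm/s]

def synchrotron_loss_rate(B_perp, gamma):
    """Synchrotron energy-loss rate of Eq. 2.3 in CGS units:
    -(dEe/dt)_syn = (2/3) c (e^2/(m_e c^2))^2 B_perp^2 gamma^2,
    giving erg/s for B_perp in gauss."""
    return (2.0 / 3.0) * C_CM * R_E ** 2 * B_perp ** 2 * gamma ** 2

def ic_mean_energy(E_target, gamma):
    """Classical-limit inverse Compton: <E_IC> = (4/3) E_target gamma^2,
    in the same units as E_target."""
    return (4.0 / 3.0) * E_target * gamma ** 2

# A gamma = 1e4 electron upscattering a ~2 eV optical photon yields a
# gamma-ray of a few times 1e8 eV (illustrative numbers):
E_ic_ev = ic_mean_energy(2.0, 1e4)
```

The γ² scaling in both relations is the reason the same relativistic electron population can simultaneously explain radio synchrotron emission and GeV inverse-Compton emission from a source.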
2.2 Gamma-ray sources
Cosmic gamma-ray sources come in great numbers. They are located on all
distance scales and include:
• Circumsolar sources
It can be deduced from data taken with the Energetic Gamma-Ray Experiment Telescope (EGRET), described in Section 2.4, that albedo gamma-rays
are created in small solar system bodies in the main belt asteroids between
Mars and Jupiter, the Jovian and Neptunian Trojans and in the Kuiper Belt
objects beyond Neptune through the interaction of cosmic-rays with the solid
rock and ice [8]. The diffuse emission from these objects has an integrated
flux of less than ∼6 × 10⁻⁶ cm⁻² s⁻¹ in the energy range 100–500 MeV.
This is about 12 times the gamma-ray flux from the Moon, where the same
process occurs. Studies have also been conducted with the successor Fermi
Gamma-ray Space Telescope and the preliminary results for the Moon [9] are
in general agreement with EGRET observations. Strong albedo gamma-ray
emission due to cosmic-ray interactions with the Earth’s atmosphere has also
been observed by both EGRET [10] and the Fermi Gamma-ray Space Telescope [11].
• The Sun
The Sun is expected to emit gamma-rays due to IC scattering of solar optical
photons by GeV-energy cosmic-ray electrons as well as hadronic interactions
of cosmic-rays in the solar atmosphere and photosphere. No significant excess was, however, initially found in the direction of the Sun with EGRET
data and an upper limit was put on the flux above 100 MeV [12]. In an updated and improved analysis, gamma-rays in the halo around the Sun could
be detected with a total flux above 100 MeV of 4.4 × 10⁻⁷ cm⁻² s⁻¹ [13].
The emission has also been seen with the Fermi Gamma-ray Space Telescope
and the preliminary results are in general agreement with EGRET measurements [14].
• Galactic sources
Within the Milky Way galaxy, there are a number of sources that can emit
gamma-rays. The galactic diffuse emission, concentrated in the galactic plane,
consists of three components: truly diffuse emission from high energy particle
interactions with the interstellar gas and radiation fields and unresolved and
faint galactic point sources [15]. Pulsars are rapidly rotating and highly magnetised neutron stars that emit radiation in multiple wavelengths, some even
at gamma-ray energies [16]. The gamma-ray pulsars exhibit light-curves with
a double-pulse structure, which is different from pulsars at lower energies.
They also tend to be younger than other pulsars and have higher magnetic
fields. Different models exist as to how the emission is created. Supernova
remnants are created when blast waves and reverse waves from supernova explosions propagate in the surrounding medium. The shock waves are thought
to accelerate particles up to relativistic energies via Fermi acceleration [17, 18]
and the resulting extended sources emit gamma-rays through a variety of mechanisms [19, 20]. Another category is microquasars, which are X-ray binaries
with associated jets in which high-velocity relativistic shocks are believed to
give rise to high-energy gamma-rays [21].
• Extragalactic sources
There are also many potential extragalactic sources of gamma-rays. In the
Third EGRET Catalogue (see Section 2.4), there are tens of sources classified
as Active Galactic Nuclei (AGNs). The gamma-ray emission in these objects
is believed to originate in the relativistic jets associated with the AGNs, but
what causes the emission is still under debate. A first catalogue of AGNs has
also been created using Fermi-LAT measurements [22]. In the EGRET catalogue, there are also 120 unidentified sources at |b| > 10°. The potential nature of these sources includes other active galaxies such as blazars, BL Lacs and star-forming galaxies, but also clusters of galaxies and the isotropic diffuse extragalactic background, which has also been measured by the Fermi-LAT [23].
Another type of object is the Gamma-Ray Burst (GRB). GRBs are characterised by a sudden and rapid enhancement of gamma-rays from space. Since
the discovery of the first GRB in 1967, several thousand have been detected,
isotropically distributed over the sky. The X-ray and radio afterglows from
the GRBs have led to the discovery of host galaxies with large redshifts. This
places GRBs at cosmological, rather than at galactic, distances.
• Dark matter
As mentioned in the previous section, dark matter is a possible source of
gamma-rays. The evidence for dark matter is today overwhelming, but its
nature remains largely unexplored. The field is, however, highly active and,
as explained further in Chapter 3, there are a large number of theories for its
particle nature and spatial distribution.
2.3
Detection techniques
A great advantage of gamma-ray measurements as compared to those of charged particles
is that gamma-rays are not deflected by the various magnetic fields present in the
Universe and therefore point directly to the source of the emission, whereas charged
particles are deflected and therefore undergo diffusion in different directions before
reaching us.
As explained in Chapter 1, gamma-ray interactions are dominated by pair production and the subsequent development of electromagnetic showers above a certain material-dependent threshold energy. At these energies, two different kinds of
detection techniques are currently used.
The first technique is based on detecting the primary photon and the shower particles it produces via pair production of photons and bremsstrahlung from charged
particles. These detectors are either balloon-based or space-based, since the Earth’s
atmosphere absorbs most of the shower. Due to the limited size of the detectors
that can be sent up in balloons or satellites, a large fraction of the shower from
a high energy photon will leak out of the detector. The longitudinal size of the
detector therefore sets a natural maximum energy that can be measured with these
instruments. This occurs roughly when the maximum of the shower is outside the
detector.
The Earth’s atmosphere acts naturally as a gigantic calorimeter and this can
be used to detect gamma-rays indirectly in ground-based instruments. The second
technique is therefore based on looking for the Cherenkov light that is sent out
from the charged particles produced in the electromagnetic showers (see also Section 1.1). These so-called Cherenkov telescopes can typically only measure gamma-rays of several tens of GeV and above, since showers from lower-energy gamma-rays
are absorbed high up in the atmosphere. The energies that are measured by these
telescopes are therefore typically much higher than those normally measured by
balloon- or space-based detectors. There are currently many ground-based telescopes of this kind looking for gamma-rays. The most well known of these are
H.E.S.S. [24], MAGIC [25], VERITAS [26] and CANGAROO-III [27]. The individual designs of these instruments are beyond the scope of this thesis but the
interested reader can find an overview in e.g. [28].
A technique related to the one used by the ground-based Cherenkov telescopes
is to measure the Cherenkov light emitted by the charged particles in the showers
in large pools or tanks of water on the ground. This technique is optimal for even higher-energy particles than the ones measured by the ground-based Cherenkov telescopes, since the shower has to penetrate more of the atmosphere and reach ground level. HAWC [29] and its predecessor Milagro [30] are two examples of experiments
utilising this technique.
The gamma-ray sensitivities and energy ranges of the various experiments above
are shown in Fig. 2.1.
Figure 2.1. The sensitivities and energy ranges of the various experiments measuring gamma-rays.
2.4
History
This section largely follows the more detailed historical overview given in [6].
Until the early 1960s, detectors were not sufficiently sophisticated to be able
to detect gamma-rays from space. The discovery of the gamma-ray was, however,
made much earlier by Paul Villard in 1900 [31, 32]. Villard saw that gamma-rays
were an especially penetrating form of radiation that was unaffected by electric and
magnetic fields.
Fourteen years later, in 1914, diffraction experiments by Rutherford and Andrade revealed gamma-rays to be a form of light with a much shorter wavelength than X-rays [33, 34]. The first link between gamma-rays and interstellar
space was suggested by Millikan and Cameron, who studied cosmic-rays extensively.
In 1931, they suggested that cosmic-rays were in fact photons and that they came
from interstellar space (rather than from the atmospheres of stars) [35]. Cosmic
gamma-ray sources were also investigated by others, but the idea was then abandoned.
The concept was revitalised in the early 1950s, after the discovery of the neutral pion.
The earliest contributions came from Feenberg and Primakoff in 1948 [36]. In 1952,
Hayakawa predicted that when cosmic-rays collide with interstellar matter, gamma-rays should be produced from the decay of neutral pions [37]. The same year,
Hutchinson estimated the gamma-ray emission from cosmic bremsstrahlung [38].
Six years later, in 1958, Morrison estimated the gamma-ray flux from many different
astronomical objects [39].
Early gamma-ray detectors suffered from poor background rejection and were, in addition, not sensitive enough. The first detector to reliably measure gamma-rays from space was the Explorer-XI satellite, which was launched in 1961. In
the Explorer-XI instrument, shown in Fig. 2.2, gamma-rays were converted into
electron-positron pairs in a crystal scintillator that consisted of alternating slabs
of CsI and NaI. Signals from the scintillator were required to be in coincidence with a signal in a Cherenkov detector and were read out only if there was no recorded event in the plastic anticoincidence detector. After analysis of the 127 recorded candidate gamma-rays, 22 events remained with a celestial origin, whereas the rest were most likely secondary gamma-rays from cosmic-rays interacting in the Earth’s atmosphere [40].
The next important detector for gamma-rays to be launched was the Orbiting
Solar Observatory (OSO) III in 1967. The gamma-ray instrument onboard consisted of a converter sandwich of CsI crystals and plastic scintillators, a directional
Cherenkov counter and an energy detector with layers of NaI and tungsten, surrounded by an anticoincidence shield of plastic scintillators [41]. The instrument
was sensitive to gamma-rays above 50 MeV and recorded 621 events concentrated
along the galactic equator [42].
The same year OSO III sent its last data transmission, a series of military
satellites called Vela was launched. They were initially constructed to detect nuclear
explosions from space but also detected the first transient sources of gamma-rays,
later known as GRBs [43]. Vela 5A and 5B, launched in 1969, and Vela 6A and
6B, launched in 1970, recorded 73 bursts altogether with the gamma-ray detectors
onboard [44]. The detectors consisted of CsI crystals with a total volume of about
60 cm3 and had an energy range of 150–750 keV.
More GRBs were detected in the late 1970s and early 1980s by e.g. the
Pioneer Venus Orbiter and Venera satellites, which were sent to Venus, and the
Prognoz satellites.
Figure 2.2. A sketch of the detector on the Explorer XI satellite (from [40]).
In the early 1970s, spark chamber technology emerged. Spark chambers consist of layers of a high-Z material, e.g. tungsten, in a chamber of gas, usually neon
of layers of a high-Z material, e.g. tungsten, in a chamber of gas, usually neon
or argon. The choice of material in the plates is important, since the interaction
probability is proportional to Z². In a spark chamber, the plates are alternately grounded and at a high voltage, and when a particle enters the gas chamber, the
gas is ionised and sparks are produced between the plates in the location of the
particle trail. The sparks can be recorded and, thus, the direction of the incoming
particle can be determined.
The first satellite to successfully utilise the technology was the Small Astronomical Satellite (SAS) II, which was launched in 1972. The SAS II detector system
consisted of 32 modules of wire spark chambers, 16 on either side of four central
plastic scintillators. Interleaved between each module were thin tungsten plates,
serving as conversion planes for the incoming gamma-rays. The directions of the
gamma-rays were measured by the spark chambers and the energy was determined
by measuring the Coulomb scattering. At the bottom of the instrument were four directional Cherenkov detectors used for triggering, and surrounding the whole instrument was a single-piece plastic scintillator dome, which was used for charged-particle discrimination. The different components onboard the SAS II instrument
can be seen in Fig. 2.3.
SAS II recorded approximately 8000 photons with E > 30 MeV during roughly
seven months before a failure in its power supply ended the data collection. The
satellite gave the first detailed view of the gamma-ray sky. These images showed
that the flux was concentrated in the galactic plane and the galactic centre [45].
SAS II also established that there were objects, other than the Milky Way or the
Sun, which emitted gamma-rays, namely pulsars. Intensity peaks, coincident with
the Crab and Vela pulsars, were found and an unidentified object, later known as
the Geminga pulsar, was discovered.
Figure 2.3. A sketch of the spark chamber-based detector system on SAS II (from [45]).
A few years later, in 1975, the COS-B satellite was launched. The detector
system was similar to the one used in SAS II [46]. The major difference from SAS II was that COS-B was put in a highly eccentric orbit, taking it further away from
the background radiation produced by the Earth’s atmosphere. In total, COS-B
detected about 200,000 photons during its seven year mission and provided maps of
the gamma-ray sky in energy bands ranging from 300 MeV to 5 GeV. A catalogue
containing 25 sources was also published, 20 of which were unknown [47].
In 1991, the heaviest scientific instrument ever deployed from a space shuttle, the Compton Gamma-Ray Observatory (CGRO), was put in orbit by NASA.
The satellite carried four instruments, the Burst And Transient Source Experiment
(BATSE), the Oriented Scintillation Spectrometer Experiment (OSSE), the Imaging Compton Telescope (COMPTEL) and the Energetic Gamma-Ray Experiment
Telescope (EGRET).
BATSE consisted of eight thin scintillation detector modules, one placed at each corner of the satellite, and was designed to detect transient sources of soft gamma-rays. It
recorded in total 2704 GRBs, 1192 solar flares, 1717 magnetospheric events, 185
soft gamma-ray repeaters (objects characterised by large bursts of gamma-rays and
X-rays at irregular intervals), and 2003 transient sources. The GRBs were isotropically distributed, which suggested that they were extragalactic in origin.
The OSSE detector had four independent phoswich modules (optically coupled
scintillators with dissimilar pulse shapes) consisting of NaI(Tl) and CsI(Na). It
was designed to observe nuclear-line emission from low-energy gamma-ray sources
in the energy range 0.05–10 MeV. The measurements performed by OSSE of the
galactic centre at 511 keV, the energy of photons from electron-positron annihilation, showed that this radiation was concentrated within 10 degrees of the centre.
COMPTEL had an energy range of 0.8–30 MeV, defined by two detector arrays located 1.5 m from each other, the upper one made of the low-Z liquid scintillator NE213 and the lower one of the high-Z scintillator NaI(Tl) [48]. The whole detector
was surrounded by a plastic scintillator dome, used to reject charged particles. The
instrument was calibrated using two small plastic scintillator detectors containing
weak ⁶⁰Co sources, located on the sides of the telescope. An incident gamma-ray
was Compton-scattered in the upper array and then interacted in the lower array.
The energy losses were measured in the two arrays and determined a circle on the sky, which gave the possible directions of the incoming gamma-ray. From COMPTEL measurements, sky maps and a catalogue of 63 gamma-ray sources, including AGNs, pulsars, galactic black-hole candidates, GRBs and supernova remnants, were produced [49].
EGRET was based on spark chamber technology and had many similarities
with SAS II. A diagram, showing the detector system can be seen in Fig. 2.4.
The instrument consisted of two modules of wire spark chambers with interspersed
conversion material (tantalum foils) for direction determination, interleaved with a
time-of-flight system for triggering events from the proper incoming direction [50].
The upper spark chamber module had 28 closely separated wire grids and the lower
spark chamber had 8 wire grids more widely separated.
Figure 2.4. A sketch of the EGRET detector system (from [50]).
The particle energies were measured with a Total Absorption Spectrometer Crystal (TASC), made from NaI and located at the bottom of the instrument. As in most previous gamma-ray telescopes, a single-piece plastic scintillator dome covered most of
the instrument and was used to discriminate against charged particles.
The energy range of EGRET extended from about 20 MeV to roughly 30 GeV
and in most of this region the energy resolution was 20–25%. The effective area
was energy dependent: about 1000 cm2 at 150 MeV, 1500 cm2 in the energy range
0.5–1 GeV and gradually decreasing for higher energies to about 700 cm2 at 10 GeV
for targets near the centre of the field-of-view.
EGRET was a very successful mission and produced many all-sky maps as well as detailed studies of different sources. In the final official list of EGRET sources, the Third EGRET Catalog, 271 excesses with a significance higher than 3σ were included [51]. About 70 of the sources in the list have been identified as AGNs, radio quasars (mostly with a flat spectrum) and BL Lacertae objects, 1 radio galaxy (Centaurus A), the Large Magellanic Cloud (LMC), and 6 gamma-ray pulsars. The
remaining 170 sources were unidentified. A plot of the sources from the Third
EGRET Catalog in galactic coordinates can be seen in Fig. 2.5.
Figure 2.5. Sources from the Third EGRET Catalog, shown in galactic coordinates.
The size of the symbol corresponds to the highest intensity seen for the source by
EGRET (from [51]).
In April 2007, the Astro-rivelatore Gamma a Immagini LEggero (AGILE) satellite was launched into orbit [52]. The instrument weighs only about 120 kg, and its components differ in design from those of previous experiments. The satellite
carries two instruments, a gamma-ray imager and a hard X-ray imager. At the
top is the Super-AGILE hard X-ray detector, which has an angular resolution of
6 arcmin and the energy range 18–60 keV. The system is a so-called coded-mask
design with a thin shadowing tungsten mask, 14 cm above a silicon detector plane.
The gamma-ray imager covers energies from 30 MeV to 50 GeV and consists of a
Silicon Tracker (ST) module, directly below Super-AGILE, and a Mini-Calorimeter
(MCAL). The ST has high-resolution silicon microstrip detectors organised in 12
layers at 1.9 cm intervals and with interleaved tungsten conversion planes between
the 10 uppermost layers. The ST contains in total 0.8 X0 of material on-axis and provides the direction of the gamma-rays. Below the ST is the MCAL, which is used for energy measurements and contains 30 CsI(Tl) crystals in 2 layers (corresponding to 1.5 X0).
All subdetectors are covered by an anticoincidence (AC) system, where each side
is segmented into three plastic scintillators whereas the top has a single plastic
scintillator layer.
The first AGILE catalogue of high-confidence gamma-ray sources found after
one year of observations contains 47 sources, of which 8 are unidentified [53].
The AGILE satellite is designed to be complementary to the much larger Fermi
Gamma-ray Space Telescope (described in detail in Chapter 4). The detector designs are virtually identical but differ in scale. The Fermi Gamma-ray Space Telescope will, however, perform an all-sky survey during the first phase of its mission, whereas AGILE is focused on fixed-pointing observations.
In Fig. 2.6, the sources with larger than 4σ significance that have been found
with the first 11 months of data from the Fermi Gamma-ray Space Telescope are
shown [54]. The total number of sources in this catalogue, which is also called the First Fermi-LAT Catalog (or 1FGL, from 1st Fermi Gamma-ray LAT), is 1451, and it includes starburst galaxies, AGNs, pulsars (PSR), pulsar wind nebulae (PWN), supernova remnants (SNR), X-ray binary stars (HXB) and microquasars (MQO).
Currently, 630 of the sources are categorised as unassociated.
Figure 2.6. Sources with more than 4σ significance in the First Fermi-LAT Catalog.
Chapter 3
Dark matter
This chapter provides an overview of dark matter. It reviews some of the evidence supporting the existence of dark matter, the constraints that dark matter particle candidates must satisfy, and the different approaches that are followed today to detect
them. There are many review papers about dark matter available, see e.g. [55]
and [56]. This chapter will therefore only summarise the subject.
3.1
Evidence
The existence of dark matter (DM) was first suggested by Zwicky in 1933 [57].
Zwicky investigated the radial velocities of eight galaxies in the Coma galaxy cluster
and observed an unexpectedly large velocity dispersion. He suggested that the mass
of the visible matter was not enough to hold the cluster together and that “dark
matter” was required [58].
That luminous objects move faster than what would be expected if the only
influence was the gravitational pull from visible matter has since been observed in
many different types of objects. These objects include stars, gas clouds, globular
clusters and entire galaxies. A typical example, which serves as one of the more compelling and direct pieces of evidence for the existence of DM, is given by the rotation curves of galaxies.
An object that moves in a Keplerian orbit at radius r has a velocity given by v(r) = √(GM(r)/r), where M(r) is the mass contained within the disk at radius r. At larger distances, beyond the optical disc, the rotational velocity should fall as v(r) ∝ 1/√r. Observations of the 21 cm excitation line from hydrogen, however, show that v(r) is approximately constant. This implies that either there is particle DM in the form of a halo with M(r) ∝ r, or the gravitational theory needs to be revised.
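The flat-rotation-curve argument can be illustrated with a toy calculation: a luminous disk alone gives a falling Keplerian curve, while adding a halo with M(r) ∝ r flattens it. In the sketch below, the enclosed-mass model and all parameter values are illustrative assumptions in arbitrary units, not fits to data.

```python
import math

G = 1.0        # gravitational constant (arbitrary units)
M_DISK = 1.0   # total luminous mass
R_DISK = 1.0   # radius containing most of the luminous matter

def v_luminous(r):
    """Circular velocity from the luminous disk alone: v(r) = sqrt(G M(r) / r)."""
    m_enclosed = M_DISK * min(r / R_DISK, 1.0) ** 2  # crude enclosed-mass model
    return math.sqrt(G * m_enclosed / r)

def v_with_halo(r, halo_slope=1.0):
    """Add a dark halo with M_halo(r) = halo_slope * r, which flattens the curve."""
    m_enclosed = M_DISK * min(r / R_DISK, 1.0) ** 2 + halo_slope * r
    return math.sqrt(G * m_enclosed / r)

for r in (2.0, 5.0, 10.0):
    # the luminous-only velocity falls as 1/sqrt(r); the halo curve tends to a constant
    print(f"r = {r:4.1f}  v_lum = {v_luminous(r):.3f}  v_halo = {v_with_halo(r):.3f}")
```

Beyond R_DISK the luminous term falls as 1/√r, while the halo term approaches the constant √(G · halo_slope), reproducing the observed flat behaviour.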
Since these discoveries were made, many other observations have pointed to
the existence of DM, and these include among others the Big Bang Nucleosynthesis (BBN) [59], gravitational lensing [60] and the cosmic microwave background
(CMB) [61]. The most visual evidence of DM today, shown in Fig. 3.1, is from the
merging galaxy cluster 1E 0657-558 (“Bullet Cluster”), where a clear separation
of the mass (determined from gravitational lensing with the Advanced Camera for
Surveys on the Hubble Space Telescope) and the X-ray emitting plasma (observed
with Chandra) can be seen [62].
Figure 3.1. A picture of the Bullet cluster, where the mass determined from gravitational lensing (blue) and the X-ray emitting plasma (purple) are clearly separated.
Courtesy: X-ray: NASA/CXC/M.Markevitch et al. Optical: NASA/STScI; Magellan/U. Arizona/D. Clowe et al. Lensing Map: NASA/STScI; ESO WFI; Magellan/U. Arizona/D. Clowe et al.
Together, all the observations mentioned above have constrained the fractions
of the energy density in the Universe in the form of matter and in the form of
a cosmological constant to ΩM ∼ 0.3 and ΩΛ ∼ 0.7, respectively, with ordinary
baryonic matter only constituting about ΩB ∼ 0.05 [63]. This implies that nonbaryonic matter is the dominating form of matter in the Universe.
The model that has been been favoured for a long time and that is in reasonable agreement with observations is the so-called ΛCDM model, which features
long-lived and collisionless Cold Dark Matter (CDM) and a contribution from a
cosmological constant (Λ). In this context, long-lived refers to a lifetime that is
comparable to or greater than the age of the Universe, collisionless means that the
interaction cross-section of the DM particles is negligible for the expected densities of DM halos, and cold means that the DM particles were non-relativistic when the Universe became matter-dominated, so that they could immediately start to cluster gravitationally.
The collisionless CDM paradigm is, however, not without problems. The proposed ΛCDM model fits well with observations on large scales (≫ 1 Mpc), but
there are discrepancies at smaller scales. One of these is generally referred to as
the “cuspy halo problem” or “cusp/core problem” and refers to the inconsistency
between the cuspy halo DM density towards galactic centres predicted from cosmological numerical simulations and the observed densities of the central regions of
self-gravitating systems such as clusters of galaxies [60], spiral galaxies [64], dwarf
galaxies [65] and some low surface-brightness galaxies [66].
Another reported problem is that the predicted number of substructures, i.e.
small halos and dwarf galaxies in orbit around larger objects, is larger than what
is observed [67].
To resolve the existing problems, a number of alternative models of DM have
therefore been proposed and these include strongly self-interacting dark matter,
warm dark matter, repulsive dark matter, fuzzy dark matter, self-annihilating dark
matter, decaying dark matter and massive black holes [68].
It may be noted that there is currently no consensus on whether the aforementioned problems are astrophysical or computational. On large scales, gravity is the dominant process, and computations therefore only involve Newton’s and Einstein’s laws of gravity. On smaller scales, however, the physical interactions between dark matter, ordinary matter and radiation are important, yet in most of the latest N-body simulations that predict the internal structure of galactic-size halos, these interactions are neglected for computational reasons.
An alternative theoretical approach that requires no DM is provided by models of Modified Newtonian Dynamics (MOND) [69, 70]. The proposed theories are successful
in explaining some of the observations but not all of them. In particular, the measurements of the Bullet Cluster, mentioned above, are currently difficult to explain
with MOND and related theories.
Attractive candidates for CDM are Weakly Interacting Massive Particles
(WIMPs), since they naturally provide the correct present-day relic abundance of
DM. The reason is that if the particles interact via the weak force, then the WIMPs
were in thermal equilibrium with the Standard Model particles in the early Universe
when the temperature was above the mass of the WIMP. When the temperature
dropped below the mass of the WIMP, the number density of WIMPs decreased
exponentially; finally, when the expansion rate became larger than the annihilation rate, annihilations ceased to occur and the cosmological abundance of WIMPs “froze out”.
A strict calculation of the relic density can be very complicated depending on
the model but a rough estimate of the relic abundance is given by Eq. 3.1 [71], which
is independent of the WIMP mass provided that the WIMPs are non-relativistic at
freeze-out.
Ωh² ≈ (3 × 10⁻²⁷ cm³ s⁻¹) / ⟨σv⟩   (3.1)

Here, h is the Hubble constant in units of 100 km s⁻¹ Mpc⁻¹ and ⟨σv⟩ is the thermally averaged interaction rate, where v is the relative velocity of the interacting WIMPs. The cross-section required to explain current observations, i.e. ⟨σv⟩ ≈ 10⁻²⁶ cm³ s⁻¹, is approximately the same as can be expected in electroweak interactions, where typically σv ∼ α²/Mχ² ∼ 10⁻²⁶ cm³ s⁻¹ for an assumed WIMP mass, Mχ, of about 100 GeV, and this is often referred to as the “WIMP miracle”. Here, α is the fine structure constant.
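The “WIMP miracle” coincidence can be checked at the order-of-magnitude level by combining Eq. 3.1 with σv ∼ α²/Mχ². The sketch below does this for an assumed 100 GeV WIMP mass; the unit conversion uses standard values of ħc and c, and the result is only a rough estimate, not a model calculation.

```python
ALPHA = 1.0 / 137.036        # fine structure constant
HBARC_GEV_CM = 1.973e-14     # hbar * c in GeV cm
C_CM_PER_S = 2.998e10        # speed of light in cm/s
# converts sigma*v from natural units (GeV^-2) to cm^3 s^-1
GEV2_TO_CM3_S = HBARC_GEV_CM ** 2 * C_CM_PER_S

def sigma_v_electroweak(m_chi_gev):
    """Rough electroweak-scale annihilation rate, sigma*v ~ alpha^2 / M_chi^2."""
    return ALPHA ** 2 / m_chi_gev ** 2 * GEV2_TO_CM3_S

def omega_h2(sigma_v_cm3_s):
    """Relic abundance estimate from Eq. 3.1."""
    return 3e-27 / sigma_v_cm3_s

sv = sigma_v_electroweak(100.0)            # assumed 100 GeV WIMP
print(f"sigma*v ~ {sv:.1e} cm^3 s^-1")     # of order 10^-26 cm^3 s^-1
print(f"Omega h^2 ~ {omega_h2(sv):.2f}")   # within an order of magnitude of the observed value
```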
3.2
Dark matter candidates
There is a large number of proposed DM particle candidates. In order for a particle
to be a viable DM candidate, however, a number of requirements need to be met (see e.g. [72]); all of the following questions should be answered in the affirmative:
1. Does it match the appropriate relic density?
2. Is it cold?
3. Is it neutral?
4. Is it consistent with the BBN?
5. Does it leave stellar evolution unchanged?
6. Is it compatible with constraints on self-interactions?
7. Is it consistent with direct DM searches?
8. Is it compatible with gamma-ray experiments?
9. Is it compatible with other astrophysical bounds?
10. Can it be probed experimentally?
The natural candidates for hot DM are the Standard Model neutrinos. However, observations of structure formation today disfavour the possibility that a large part of the DM is in the form of hot DM. For example, hot DM models predict so-called top-down formation of structure, i.e. that small structures were formed by the fragmentation of larger ones, but observations show that galaxies are older than superclusters [73].
A small amount of hot DM is allowed as long as it is compatible with structure
formation and CMB data and the estimated abundance is Ων ∼ 0.001 − 0.1 [73].
One of the valid DM particle candidates is the axion [74, 75], which is a consequence of the Peccei-Quinn theory [76], proposed to resolve the “strong CP problem”, i.e. why quantum chromodynamics does not seem to break the charge-parity (CP) symmetry. One of the properties of axions is their conversion to photons in the presence of electromagnetic fields, which allows for an experimental signal to be searched for.
Theories that invoke universal extra dimensions are also capable of producing
viable DM candidates, of which the most prominent is the first excitation of the
hypercharge gauge boson (B¹), which is also known as the lightest Kaluza-Klein particle [77].
A large theoretical framework that gives several DM particle candidates is supersymmetry (often shortened to SUSY). For a general overview of supersymmetric
DM, see e.g. [71]. Supersymmetry is often an integral part of string theory and is
an attempt to give a unified description of fermions and bosons and to solve the
so-called hierarchy problem in particle physics. This refers to the fact that the
radiative corrections to the mass of the Higgs boson are enormous while the mass
itself is constrained by quantum field theory to be light for the electroweak theory
to work.
In supersymmetry, every particle and gauge field has a superpartner. The gauge fields given by gluons (g) and the W± and B bosons have associated fermionic superpartners called gluinos (g̃), winos (W̃) and binos (B̃), respectively, and fermions have associated scalar partners (quarks become squarks and leptons become sleptons). An additional Higgs field is also introduced. What follows from supersymmetry is that for every boson loop correction there is a fermion loop correction that cancels it, which in turn would help alleviate the hierarchy problem.
In the simplest models of supersymmetry, there is a multiplicative quantum
number called R-parity, which is conserved. This was originally imposed in order
to suppress the rate of proton decay and is defined as in Eq. 3.2 [55],

R ≡ (−1)^(3B+L+2s),   (3.2)
where B is the baryon number, L is the lepton number and s is the spin. All
Standard Model particles have R = 1 and all superpartners, or sparticles, have
R = −1. This means that the decay products of a sparticle must contain an odd number of sparticles. A consequence of this is that the Lightest Supersymmetric
Particle (LSP) is stable and can only be destroyed through pair annihilation.
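Eq. 3.2 is simple enough to check directly: Standard Model particles come out with R = +1 and sparticles with R = −1. The sketch below uses the standard quantum numbers of a few example particles.

```python
from fractions import Fraction as F

def r_parity(B, L, s):
    """R-parity R = (-1)^(3B + L + 2s) from Eq. 3.2 (B, L: baryon and lepton number; s: spin)."""
    exponent = 3 * F(B) + F(L) + 2 * F(s)
    assert exponent.denominator == 1, "3B + L + 2s must be an integer"
    return (-1) ** int(exponent)

# Standard Model particles have R = +1 ...
assert r_parity(F(1, 3), 0, F(1, 2)) == +1   # quark
assert r_parity(0, 1, F(1, 2)) == +1         # electron
# ... while their superpartners have R = -1
assert r_parity(F(1, 3), 0, 0) == -1         # squark (spin 0)
assert r_parity(0, 0, F(1, 2)) == -1         # neutralino (spin 1/2)
```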
The theoretically favoured non-baryonic DM particle candidate and also the
most widely searched for experimentally today is the LSP, which is often assumed
to be the neutralino. In the Minimal Supersymmetric Standard Model (MSSM),
the neutralino is a mix of the bino, wino and higgsino states. The mix gives the neutralino four mass eigenstates, χ̃⁰₁, χ̃⁰₂, χ̃⁰₃ and χ̃⁰₄, where χ̃⁰₁ is the lightest one, usually denoted by only χ.
In neutralino pair-annihilation, the leading channels at low neutralino velocities
are annihilations into fermion-antifermion pairs, gauge boson pairs and final states
with Higgs bosons. These can, through different decay chains, eventually produce neutrinos, charged particles and finally neutral pions that decay into gamma-rays.
These gamma-rays could then be observed by Fermi Gamma-ray Space Telescope
as a continuum spectrum. In this model, no tree-level Feynman diagrams exist for
the pair annihilation of neutralinos directly into two gamma-rays. The annihilation
into that particular final state must therefore proceed through loops which results
in a significant suppression of the annihilation rate [78].
The neutralino is, however, not the only valid DM candidate in supersymmetric
theory. Other candidates include e.g. the gravitino and the axino. For more
detailed reviews of the possible DM candidates see e.g. [72, 79].
3.3
Dark matter properties
Signals from DM in the gamma-ray region can be categorised into continuum signals and spectral line signals and are produced through the processes explained in
Section 2.1.
Continuum signals are excesses in the overall spectrum that cannot be accounted for by the existing components, such as the diffuse galactic emission or
the isotropic diffuse emission. This kind of search is limited by the precision to
which the existing components can be described, unless the search is conducted in
a region where the contribution from the known components is small. An example
of such a region would be DM-dominated subhalos at high galactic latitudes, where
the galactic diffuse emission is small.
Many of the viable DM candidates are able to produce spectral lines via annihilation or decay channels directly into two monochromatic gamma-rays. If the DM
particles (χ) are non-relativistic and annihilate, the energy of each photon will be
Eγ = Mχ . For decays, the corresponding energy is instead Eγ = Mχ /2.
A spectral line can also be produced if the annihilation of the DM particles
creates one photon and some other particle (X). The other particle can e.g. be
a Z-boson, a Higgs boson, a neutrino or a non-Standard Model particle. In that
case, the energy of the photon is determined by the mass of the DM particle and
the mass of the other particle according to Eq. 3.3 [63].
Eγ = Mχ (1 − MX²/(4Mχ²))   (3.3)
The corresponding equation for decays is given by the substitution Mχ → Mχ /2
in Eq. 3.3.
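Eq. 3.3 is straightforward to evaluate. The sketch below computes the line energies for the γγ and γZ final states of an annihilating WIMP; the 150 GeV mass is an illustrative assumption and M_Z is the standard Z-boson mass.

```python
M_Z = 91.19  # Z-boson mass in GeV

def line_energy(m_chi, m_x=0.0):
    """Photon energy for chi chi -> gamma X (Eq. 3.3); m_x = 0 gives the gamma-gamma line."""
    return m_chi * (1.0 - m_x ** 2 / (4.0 * m_chi ** 2))

m_chi = 150.0  # assumed WIMP mass in GeV
print(line_energy(m_chi))        # gamma-gamma line: E_gamma = M_chi
print(line_energy(m_chi, M_Z))   # gamma-Z line: shifted below M_chi
```

For decays, the same function applies after the substitution Mχ → Mχ/2 noted above.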
Models predicting spectral line signals can be constructed in a variety of theoretical frameworks. These include e.g. neutralino annihilations [71], wino annihilations [80], inert Higgs annihilations [81], Kaluza-Klein annihilations assuming universal extra dimensions [82, 83], the Green-Schwarz mechanism [84], gravitino decays [85], hidden vector DM decays [86] and Dirac fermion DM annihilations into Higgs particle final states [87]. Depending on the model, between one and three spectral lines are predicted, and in some cases the final states producing them constitute the leading channels.
An observation of a spectral line would be a “smoking gun” for DM, since no other astrophysical process should be able to produce it. However, several models predict either low branching fractions or low cross-sections for these particular channels. A halo with a large central concentration, the existence of substructure that would boost the signal (especially in spatial regions with low background emission), or the Sommerfeld enhancement [88] might therefore be needed in order to see such a signal.
For monochromatic gamma-rays, the relation between the flux (Φ) and the annihilation cross-section (σ) is given by Eq. 3.4,

\Phi = \frac{N_\gamma \langle \sigma v \rangle_{\gamma X}}{8 \pi M_\chi^2} \, L ,    (3.4)
where Nγ = 2 for X = γ, Mχ is the DM particle mass, v is the average velocity of the DM particles and L is the line-of-sight integral given by Eq. 3.5.

L = \int \mathrm{d}b \int \mathrm{d}l \int \mathrm{d}s \, \cos b \, \rho^2(\vec{r})    (3.5)

Here, b and l correspond to the galactic latitude and longitude respectively, ρχ is the DM density and r = (s² + R⊙² − 2 s R⊙ cos l cos b)^{1/2}, in which R⊙ = 8.5 kpc corresponds to the approximate distance from the galactic centre to the solar system.
The corresponding equations for the decay lifetime (τγX) are derived by performing the substitutions ⟨σv⟩/2Mχ² → 1/(τ Mχ) in Eq. 3.4 and ρ² → ρ in Eq. 3.5.
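The flux relation of Eq. 3.4 and its decay counterpart can be sketched numerically as follows (an illustrative sketch, not code from the thesis; the function names are assumptions and unit bookkeeping is left to the caller):

```python
import math

def line_flux(sigma_v, m_chi, los_integral, n_gamma=2):
    """Eq. 3.4: Phi = N_gamma <sigma v> L / (8 pi M_chi^2).
    For X = gamma, N_gamma = 2 as stated in the text."""
    return n_gamma * sigma_v * los_integral / (8.0 * math.pi * m_chi ** 2)

def line_flux_decay(lifetime, m_chi, los_integral_rho, n_gamma=2):
    """Decay version, via the substitutions <sigma v>/(2 M^2) -> 1/(tau M)
    and rho^2 -> rho in the line-of-sight integral:
    Phi = N_gamma L_rho / (4 pi tau M_chi)."""
    return n_gamma * los_integral_rho / (4.0 * math.pi * lifetime * m_chi)
```

The decay formula follows by rewriting Eq. 3.4 as (Nγ/4π) · (⟨σv⟩/2Mχ²) · L before applying the substitution.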
The flux as expressed by Eq. 3.4 can be reformulated to Eq. 3.6 [89],

\Phi(\psi, \Delta\Omega) = 0.94 \times 10^{-11} \, \frac{N_\gamma \langle \sigma v \rangle_{\gamma\gamma}}{10^{-29}\,\mathrm{cm}^3\,\mathrm{s}^{-1}} \left( \frac{10\ \mathrm{GeV}}{M_\chi} \right)^{2} \langle J(\psi) \rangle_{\Delta\Omega} \times \Delta\Omega \ \mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1},    (3.6)

where ∆Ω is the solid angle and the dimensionless line-of-sight-dependent function J(ψ) is given by:

J(\psi) = \frac{1}{R_\odot} \left( \frac{1}{\rho(R_\odot)} \right)^{2} \int_{\text{line-of-sight}} \rho_\chi^2(l) \, \mathrm{d}l(\psi)    (3.7)

This is averaged over the solid angle according to:

\langle J(\psi) \rangle_{\Delta\Omega} = \frac{1}{\Delta\Omega} \int_{\Delta\Omega} \mathrm{d}\Omega' \, J(\psi')    (3.8)
The assumed value of the local DM density (ρ (R⊙ )) is currently under debate. In DarkSUSY, a publicly available advanced numerical package for DM calculations [89], a value of 0.3 GeV cm−3 is assumed. However, later studies have
indicated that 0.4 GeV cm−3 may be closer to the correct value [90].
3.4 Halo models
The DM distribution on small scales, i.e. on galactic and sub-galactic scales, is
still under debate and plays a crucial role for the detection of DM signals. To
describe most of the observed rotation curves of galaxies, a phenomenological halo
density profile, based on state-of-the-art N-body simulations, is generally used. This
smooth and spherically symmetric profile is given by Eq. 3.9,
\rho(r) = \frac{\delta_c \rho_c}{(r/r_s)^{\gamma} \left[ 1 + (r/r_s)^{\alpha} \right]^{(\beta-\gamma)/\alpha}} ,    (3.9)
where r is the radial distance from the galactic centre, rs is a scale radius, δc is a characteristic dimensionless density and ρc = 3H²/8πG is the critical density for closure.
closure. There are a number of widely used halo profiles that differ in the values
of the (α,β,γ) parameters. The more popular profiles are the Navarro, Frenk and
White (NFW) model with (1,3,1) [91], the isothermal profile with (2,2,0) [92], the
Moore model with (1.5,3,1.5) [93] and the Kravtsov model with (2,3,0.4) [94].
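The generalised profile of Eq. 3.9 with the (α, β, γ) parameter sets listed above translates into a few lines of code (an illustrative sketch; the function and dictionary names are assumptions):

```python
def halo_density(r, r_s, delta_c_rho_c, alpha, beta, gamma):
    """Eq. 3.9: rho(r) = delta_c rho_c /
    [ (r/rs)^gamma * (1 + (r/rs)^alpha)^((beta - gamma)/alpha) ]."""
    x = r / r_s
    return delta_c_rho_c / (x ** gamma * (1.0 + x ** alpha) ** ((beta - gamma) / alpha))

# (alpha, beta, gamma) for the profiles quoted in the text
PROFILES = {"NFW": (1, 3, 1), "isothermal": (2, 2, 0),
            "Moore": (1.5, 3, 1.5), "Kravtsov": (2, 3, 0.4)}
```

At r = rs the NFW profile, for instance, evaluates to one quarter of the normalisation δcρc, since the denominator becomes 1 · 2².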
Another observationally favoured halo profile is the Einasto profile [95, 96],
which is given in Eq. 3.10,
\rho_{\text{Einasto}}(r) = \rho_s \, e^{-(2/a)\left[ (r/r_s)^{a} - 1 \right]} ,    (3.10)
where ρs is the core density and a is a shape parameter.
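The Einasto profile of Eq. 3.10 translates directly into code (illustrative names; note that ρ(rs) = ρs, since the exponent vanishes at r = rs):

```python
import math

def einasto_density(r, r_s, rho_s, a):
    """Eq. 3.10: rho(r) = rho_s * exp(-(2/a) * ((r/rs)^a - 1))."""
    return rho_s * math.exp(-(2.0 / a) * ((r / r_s) ** a - 1.0))
```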
The dark matter halo profiles can be referred to as cored, cuspy or spiked, depending on whether the central density is proportional to r^{-γ} with γ ≈ 0, γ ≳ 0 or γ ≳ 1.5, respectively.
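The line-of-sight integral of ρ² that enters Eq. 3.5 and Eq. 3.7 can be approximated numerically. The sketch below assumes an NFW profile with an arbitrary normalisation and rs = 20 kpc, and clamps the radius near the galactic centre to avoid the NFW divergence — all of these are illustrative choices, not values from the thesis:

```python
import math

R_SUN_KPC = 8.5  # distance from the galactic centre to the solar system

def nfw_rho(r):
    """NFW profile (Eq. 3.9 with (alpha, beta, gamma) = (1, 3, 1));
    normalisation and r_s = 20 kpc are assumed for illustration."""
    x = r / 20.0
    return 1.0 / (x * (1.0 + x) ** 2)

def los_integral(psi_rad, rho=nfw_rho, s_max=100.0, n=4000):
    """Trapezoidal approximation of the integral of rho^2 along a line of
    sight at angle psi from the galactic centre direction (cf. Eq. 3.7,
    up to normalisation). s is the distance along the line of sight."""
    ds = s_max / n
    total = 0.0
    for i in range(n + 1):
        s = i * ds
        r = math.sqrt(s * s + R_SUN_KPC ** 2
                      - 2.0 * s * R_SUN_KPC * math.cos(psi_rad))
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho(max(r, 1e-3)) ** 2  # clamp to avoid the r -> 0 cusp
    return total * ds
```

A line of sight passing close to the galactic centre picks up far more ρ² than one at large ψ, which is why the galactic centre is a preferred search region.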
3.5 Detection techniques
There are currently two major ways in which a particle detection of DM is pursued.
The first, direct detection, is based on measuring the recoil energy of nuclei when
DM particles, generally assumed to be WIMPs, scatter off them. Due to the low energy of the recoils, the experiments must be shielded and placed deep underground
to protect the detectors from unwanted background. The DAMA/LIBRA [97] and
CDMS [98] experiments are two examples of collaborations active in this type of
search.
In the second detection technique, indirect detection, the DM particles themselves are not observed; rather, one observes the effects they give rise to or the secondary particles they create. This technique can be further categorised into two different approaches.
The first approach involves detecting the secondary particles from annihilating
or decaying DM that is gravitationally bound to other astrophysical objects or to
itself. This type of search is exercised in a wide variety of experiments and for many
assumed final state particles.
For gamma-rays, DM searches can be performed with ground-based air-shower experiments such as H.E.S.S., MAGIC, VERITAS, Milagro and HAWC, which were already mentioned in Section 2.3, and with space-based gamma-ray satellites such as the Fermi Gamma-ray Space Telescope.
For neutrinos, many neutrino telescopes are involved in DM searches. These
include AMANDA/IceCube in the Antarctic [99, 100, 101] and ANTARES [102,
103] in the Mediterranean, which attempt to detect the neutrinos produced in the
annihilation of DM particles that may be gravitationally bound to the Earth or the
Sun.
Finally, DM searches are also conducted using experiments capable of measuring
electrons, positrons, protons and antiprotons such as the space-based PAMELA
experiment [104]. Inference about the existence of DM can be made from e.g. the
energy-dependent ratio of different particle types.
In the second approach, there is a possibility that DM can be artificially created
in the energetic collisions produced at large accelerators. The search for DM can
in that case be directed towards identifying processes where an imbalance in the
measured momentum can be seen. This “missing energy” can be the result of a
DM particle that is produced in the collision but escapes out of the detector. This
approach will be utilised in the detectors located in the Large Hadron Collider
at CERN, where protons will eventually be collided at a centre-of-mass energy of
about 14 TeV.
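The “missing energy” idea can be illustrated with a short sketch (a hypothetical helper, not from the thesis): the invisible momentum is inferred as the negative vector sum of the visible transverse momenta.

```python
import math

def missing_transverse_momentum(particles):
    """Magnitude of the missing transverse momentum, inferred as the
    negative vector sum of the visible (px, py) pairs. An imbalance here
    can signal an invisible particle escaping the detector."""
    px = -sum(p[0] for p in particles)
    py = -sum(p[1] for p in particles)
    return math.hypot(px, py)
```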
3.6 Experimental status
Currently, new experimental results for DM are being published at a high pace,
making the field very dynamic. Though most results have presented non-detections
in the form of upper limits, unexpected features have also been seen. The most
recent developments of this kind are therefore briefly reviewed here.
In the field of direct detection, the DAMA/LIBRA collaboration has claimed
detection of an annual modulation believed to be caused by the Earth’s movement
relative to a WIMP halo [105]. The results are, however, still controversial at this
point since no other experiment has been able to observe a signal of that kind.
Another direct detection experiment, CDMS-II, has reported two signal events in
their specified signal region [106]. However, the probability of having two or more
background events in the signal region is stated to be 23%, which means that the
detection is not very significant.
As for charged particles, the fraction of positrons was recently measured by the
PAMELA experiment and an unexpected excess in the fraction was found in the
high end of the energy range [107]. One of the possible explanations for the observed excess is DM [108, 109, 80]. This measurement has later been complemented by measurements of the combined electron and positron spectrum by the balloon-borne ATIC experiment [110] and the Fermi Large Area Telescope [111]; the excess reported by the former is, however, not confirmed by the latter.
The measurements can, however, currently not be well fit by conventional galactic propagation models or by models assuming a continuous distribution of sources. Solutions explaining both the PAMELA and Fermi Large Area Telescope
data without invoking DM have, however, been proposed and include nearby pulsars
and source stochasticity [112] as well as secondary acceleration in the sources [113].
Since many of the results above are unexpected, the interest in the community
for independent checks by other instruments has increased. One of the more anticipated experiments awaiting launch is AMS-02 [114], an instrument similar in design to PAMELA but with greater sensitivity and improved performance. At the time of writing, it is scheduled to be launched on one of the final space shuttle missions and will make precision measurements of the cosmic-ray sky
from the International Space Station.
In the Fermi-LAT Collaboration, a variety of DM searches have been performed.
The currently published results include searches in clusters of galaxies [115], dwarf
spheroidal galaxies [116, 117] and searches for cosmological DM in the isotropic
diffuse emission [118] and spectral lines [119] (see also Section 7.6). Overall, these
studies are beginning to probe the available and theoretically interesting parameter
spaces, but a detection has not been made so far. There are also on-going efforts to
study the galactic centre in more detail and to search for substructures consisting
of only DM.
In conclusion, the identification of DM is still an open question despite the large
variety of studies that have already been published. However, the field is very active
and new experiments designed to probe the available phase space are planned.
Chapter 4
Fermi Gamma-ray Space Telescope
The current generation in gamma-ray satellites, the Fermi Gamma-ray Space Telescope (henceforth denoted Fermi), was successfully launched on a Delta II Heavy launch vehicle from Cape Canaveral in Florida, USA, on 11 June, 2008. The satellite was formerly known as the Gamma-ray Large Area Space Telescope (GLAST)
but was renamed after its launch. An artist’s conception of the satellite can be
seen in Fig. 4.1. The satellite consists of two detector systems, the Large Area
Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). This chapter reviews
the scientific goals of the Fermi mission and describes the different instruments and
subsystems.
Figure 4.1. An artist’s impression of the Fermi Gamma-ray Space Telescope. The
box-like structure on the top is the LAT and the yellow detectors on the sides are
part of the GBM.
The satellite orbits the Earth at an altitude of about 565 km and with an
inclination angle of about 25.6°. One orbit takes about 90 minutes and full-sky
coverage is reached in only two orbits. The data acquisitions start and end at the
borders of the South Atlantic Anomaly (SAA). The reason for this is that the high
concentration of charged particles within the SAA can damage the electronics of the instruments. Therefore, the high voltages powering the satellite and its detectors must be lowered to a minimum level inside the SAA. If no part of the SAA is traversed during the orbit, the data acquisition starts and ends at the ascending node, i.e. where the orbit crosses the equator.
In Fig. 4.2, a visualisation of the orbits can be seen. The borders of the SAA
and the angle of inclination, presented in the figure, represent pre-launch estimates.
The borders were determined more exactly during the first phase of the mission (see
also Section 4.2.5).
Figure 4.2. A visualisation of the Fermi orbit. The blue trails represent the orbits
of Fermi and the yellow lines mark the borders of the South Atlantic Anomaly
(SAA). Data acquisitions start and end at the borders of the SAA. If no part of
the SAA is present in the orbit, the data acquisitions start and end at an ascending
node. The shown borders and inclination angle are pre-launch estimates.
4.1 Scientific goals
The scientific goals of Fermi are largely motivated by results from the predecessor
EGRET, which measured gamma-rays with energies between around 20 MeV and
30 GeV, and ground-based atmospheric Cherenkov telescope arrays, which measure
energies above several tens of GeV. The main scientific goals of Fermi are to:
• Resolve the gamma-ray sky. This includes studying the nature of the 170 unidentified EGRET sources, the extragalactic diffuse emission and the origins of the emission from the Milky Way, nearby galaxies and galaxy clusters.
• Understand the particle acceleration mechanisms in celestial sources, such as
AGNs, blazars, pulsars, pulsar wind nebulae, supernova remnants and the
Sun.
• Study the high-energy processes in GRBs and transients. GRBs (see also
Section 2.2) have been studied in many different wavelength regions, including
X-ray, optical and radio. The behaviour at gamma-ray energies is, however,
largely unknown.
• Probe the nature of dark matter. As described in Chapter 3, many models of
dark matter can be investigated with the Fermi -LAT instrument.
• Investigate the early Universe to z ≥ 6 using high-energy gamma-rays. The
era of galaxy formation can be studied with photons above 10 GeV via the
absorption by pair production of accumulated radiation from structure and
star formation (extragalactic background light) and of gamma-rays from e.g.
blazars.
4.2 Large Area Telescope
The LAT, seen in Fig. 4.3, covers the approximate energy range from 20 MeV to
more than 300 GeV and was built by an international collaboration consisting of
space agencies, physics institutes and universities from France, Italy, Japan, Sweden
and the United States.
Figure 4.3. The Large Area Telescope in cross-section. Each module has a tracker module and a calorimeter module. The tiles on the sides are part of the anti-coincidence detector shield.
The instrument is a pair-conversion telescope, designed to measure the electromagnetic showers of incident gamma-rays over a wide field-of-view while rejecting incident charged particles at a level of about one part in 10⁶. It consists of a 4 × 4 array of 16 identical modules on a low-mass structure. Each of the modules has a gamma-ray converter-tracker for determining the direction of the incoming gamma-ray and a calorimeter for measuring its energy. The tracker array is surrounded by a segmented anti-coincidence detector. In addition, the whole LAT is shielded by a thermal-blanket micro-meteoroid shield.
The data taking is governed by a programmable trigger that can utilise prompt
signals from all the subsystems. The downlink capacity from the LAT to the ground
is limited, so the data acquisition hardware reduces the event data rate to about 1 Mbps using onboard event processing.
4.2.1 Tracker
The active detector elements in the directional tracker (TKR) modules are Silicon-Strip Detectors (SSDs). Each TKR module in the LAT has a width of 37.3 cm and a height of 66 cm. The width was optimised to utilise the longest silicon strips possible while keeping a good noise performance, high efficiency and low power, and the height was a trade-off between having a large enough lever arm between successive hits in the TKR and keeping a low LAT aspect ratio, which maximises the field-of-view. For an extensive review of the TKR system, see e.g. [120].
A TKR module consists of a stack of 19 trays, which support the SSDs, the associated readout electronics and the tungsten converter foils in which pair production is induced. Only the topmost 16 layer pairs are preceded by a tungsten plane, placed just above the detector planes. There are 576 SSDs in each TKR module, arranged into 18 pairs of x and y planes; the x and y planes of a pair are separated by a gap of 2 mm. In total, each SSD detector plane has 1536 strips with a pitch of 0.228 mm.
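Assuming a simple centred indexing convention (a hypothetical helper, not the actual LAT geometry code), a hit strip index maps to a local coordinate as:

```python
PITCH_MM = 0.228  # strip pitch quoted in the text

def strip_to_local_mm(strip_index, n_strips):
    """Map a strip index (0 .. n_strips - 1) to a local coordinate in mm,
    measured from the plane centre. The indexing convention is an
    illustrative assumption."""
    return (strip_index - (n_strips - 1) / 2.0) * PITCH_MM
```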
The close proximity of the tungsten planes to the active detectors is crucial
in order to minimise the effects of multiple scattering of the charged particles in
the shower. Multiple scattering can significantly degrade the angular resolution.
Therefore, for lower energies, most of the directional information comes from the first two points of the track. At higher energies, however, the effects of multiple scattering are negligible and the angular resolution is limited mainly by the strip pitch and the gap between the silicon detector planes. The tungsten in each module weighs a total of 9 kg and converts about 63% of the gamma-rays at normal incidence above 1 GeV. A sketch of the layer-wise setup and a gamma-ray conversion is shown in Fig. 4.4.
The TKR is designed with both thin and thick tungsten converter layers, in order to reach the required performance at both ends of the energy range. Each of the first twelve planes of tungsten is 0.027 X₀ (0.095 mm) thick, while each of the final four is 0.18 X₀ (0.72 mm) thick. The concept of radiation lengths, X₀, is defined in Section 1.3. The two regions, with thin converter layers in the front of the TKR and thick converter layers in the back, have intrinsically different performances, as will be shown in Section 4.2.7.
For a single plane of silicon in the TKR, the efficiency to detect a minimum-ionising particle at nearly normal incidence with respect to the active area is > 99.4%. The noise occupancy, i.e. the probability for a single channel to have a noise hit in a given detector trigger, is less than 5 × 10⁻⁷ after masking of noisy channels (0.06% of the channels).
Figure 4.4. A gamma-ray pair-producing in a tracker module. The two directional
planes of SSDs are preceded by a conversion plane of tungsten.
4.2.2 Calorimeter
A Fermi-LAT calorimeter (CAL) module consists of 96 CsI(Tl) Detector Elements (CDEs), i.e. 12 CDEs per layer in 8 layers, supported by a carbon composite cell structure. The LAT therefore includes a total of 1536 CDEs, which give the CAL a combined weight of 1376 kg. At normal incidence, the CAL corresponds to 8.6 X₀.
The segmentation of the CAL has many advantages and helps e.g. to distinguish
between showers produced by gamma-rays and those by charged particles but it also
helps to constrain the incoming direction of the gamma-ray. In addition, it improves
the energy measurement by allowing cascade profile fitting to be performed, which
compensates somewhat for leakage into gaps and out of the back of the CAL.
The design is hodoscopic, as can be seen in Fig. 4.5, i.e. the crystal directions
in odd layers are orthogonal to the crystal directions in even layers. The size of
each crystal is 326 × 26.7 × 19.9 mm³, where the widths correspond to roughly one radiation length in CsI(Tl), i.e. 18.6 mm [121]. Two out of the four long-side
surfaces have been roughened to give a known attenuation with a better uniformity
in the light collection along the crystal. To improve light collection and optical
isolation, the crystals are individually wrapped with a reflective material called
VM 2000.
The scintillation light is collected at each end of each crystal using two silicon PIN photodiodes, which have a spectral response matched to the scintillation spectrum of CsI(Tl). The diodes are of different sizes in order to cover the large energy range of the LAT. The larger diode has an active
Figure 4.5. A picture showing the hodoscopic design of a Fermi-LAT calorimeter
module.
area of 1.5 cm2 , which is a factor of 6 larger than the active area of the smaller
diode (0.25 cm2 ). The larger diode is designed to measure smaller energy deposits,
from 2 MeV to 1.6 GeV, whereas the smaller diode handles larger energy deposits,
from 15 MeV to 100 GeV.
4.2.3 Anti-Coincidence Detector
The anti-coincidence detector (ACD) on the LAT consists of 89 Tile Detector Assemblies (TDAs) made of plastic scintillator material. The layout is sketched in
Fig. 4.6.
The scintillator tiles are 10 mm thick, except for the central row on top of the
LAT, which is 12 mm, and range in size from 15 × 32 cm² to 32 × 32 cm² depending on the location of the TDA. An example of an unwrapped tile is shown in Fig. 4.7.
In the bottom row of each of the four sides of the LAT is a long tile, 17 × 170 cm2 .
These tiles lie outside of the primary field-of-view of the LAT, where no events will
be accepted as gamma-rays. Each tile is, furthermore, wrapped with two layers of
high reflectance white Tetratec followed by two layers of light-tight black Tedlar.
Each TDA is connected to wavelength-shifting (WLS) fibers, 1 mm in diameter, which transmit the scintillation light to photo-multiplier tubes (PMTs) located on the sides of the LAT, below the TDAs. For redundancy, each tile is read out by two PMTs.
The tiles overlap in one dimension, as shown in Fig. 4.8, to minimise the open areas. The remaining gaps in the other direction, typically 2–3 mm, are unavoidable because of the wrapping material and because the tiles must be allowed to thermally expand and vibrate during launch. To detect charged particles entering through the gaps, these are instead covered with flexible scintillating fiber ribbons.
The so-called “crown” tiles, i.e. the topmost rows of tiles on the four sides of the LAT, also seen in Fig. 4.6, are extended above the tiles on the top of the LAT. The
reason for this is to minimise the irreducible background caused by protons that hit
the Micro-Meteoroid Shield (MMS) at a shallow angle, which produce gamma-rays
Figure 4.6. The layout of the tile detector assemblies in the anti-coincidence detector of the LAT (a) and the electronics assembly with the PMTs above the LAT
grid (b) (from [122]).
Figure 4.7. A picture of an unwrapped anti-coincidence detector tile (from [122]).
that enter the detector. According to simulations, the contamination from this type of events would be significantly higher without the crown tiles.
The MMS surrounds the whole ACD, to shield it from micro-meteoroids and
space debris, and consists of four layers of Nextel ceramic fabric separated by four
layers of 6 mm thick Solimide low-density foam, which is backed by 68 layers of
Kevlar fabric. According to calculations, there is a 95% probability of no more than one penetration of the MMS in 5 years.
The ACD has a segmented design for two main reasons. Firstly, segmentation prevents events with backsplash from being vetoed. Backsplash is a process in which charged particles, produced in the electromagnetic showers from gamma-rays in
Figure 4.8. A sketch of the overlapping anti-coincidence detector tiles on top of
the LAT (a) and in cross-section (b) (from [122]).
the field-of-view, propagate through the detector in the direction opposite to the incident gamma-rays and hit the ACD tiles. A simulation showing the chain of events
gamma-rays and hit the ACD tiles. A simulation showing the chain of events
can be seen in Fig. 4.9. In experiments such as EGRET, this process caused a significant decrease in effective area and additional dead time, due to the single-piece design of the anti-coincidence shield. The effects of backsplash have been thoroughly investigated and taken into consideration when designing the ACD.
Figure 4.9. A simulation of backsplash in the LAT that hits the anti-coincidence
detector (from [122]).
The second reason for having a segmented design is for usage in the background
rejection, further discussed below.
A significant difference between EGRET and the Fermi-LAT is the lack of a directional trigger system, i.e. time-of-flight detectors, in the latter. Instead, the principal trigger of the Fermi-LAT requires coinciding signals in three consecutive tracker xy layer pairs. The particles can thus come from any direction, which results in a very high first-level trigger rate of up to 10 kHz. The triggers
are mostly caused by charged particles, which as mentioned before outnumber the
gamma-rays by six orders of magnitude.
To reduce the large amount of data produced by these triggers to the downlink capacity, most of the triggers induced by charged particles must be rejected already in space using an onboard filter. The responsibility of identifying charged particles for rejection lies mainly with the ACD.
The science requirements for the LAT state that the residual background should
be no more than 10% of the diffuse gamma-ray background intensity. To meet this
goal, protons must be suppressed by a factor of about 106 and electrons by a factor
of about 104 [122]. On-orbit investigations of the residual contamination have,
however, shown a higher level of contamination than predicted from pre-launch
modeling [23]. Therefore, the event selection is currently being revised in order to
reduce the contamination to acceptable levels.
The CAL and TKR can be used to suppress protons by at least a factor of 103
by using event patterns in the TKR and shower shapes in the CAL. The remaining
factor of 103 must be provided by the ACD.
For electrons, the required rejection is tightest at lower energies, due to the steep decrease of the electron spectrum with energy. The TKR can be used to suppress electrons by a factor of 10, by identifying tracks that point to inefficient regions of the ACD. The remaining suppression, corresponding to an ACD detection efficiency of 0.9997, is achieved in several different ways.
Firstly, tracks measured by the TKR that point back to “shadowing” ACD tiles that recorded a signal are vetoed. Secondly, the information from the TKR and CAL is compared with that from the ACD, to reject events that have a high probability of being charged particles. This reduces the amount of data to the capacity of the downlink to the ground. Transmitting a very small fraction of charged particle events to the ground is, however, useful for calibration purposes. Therefore, the efficiency of the ACD to reject charged particles needs at this point only be ≥ 0.99. Finally, the required efficiency of 0.9997 for the ACD to detect singly charged particles is accomplished via off-line data analysis on the ground, including e.g. pulse height analysis.
The ultimate background rejection that meets the science requirements for the
LAT is then achieved via further data analysis on the ground.
For a more thorough review of the ACD design and specifications, see [122].
4.2.4 Event reconstruction
In the TKR, the event reconstruction is based on a Kalman filter [123, 124], which alleviates the problems in track fitting and pattern recognition caused by non-negligible multiple Coulomb scattering. The Kalman filter is iterative and considers one measurement at a time; the measurements are then added independently to the fit. With the filter, random errors can be handled in a natural way and the problem reduces to the multiple scattering error produced between two consecutive measurement planes.
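The filter can be illustrated with a deliberately minimal one-dimensional version (an illustrative sketch, not the LAT reconstruction code): the state is (position, slope), propagation between planes adds multiple-scattering noise to the slope variance, and each measured hit updates the state.

```python
def kalman_track_fit(z_planes, hits, sigma_meas, theta_ms):
    """Minimal 1D Kalman filter for a straight track. State = (x, slope);
    multiple scattering adds variance theta_ms^2 to the slope at each
    propagation step; each plane measures x with resolution sigma_meas."""
    x, s = hits[0], 0.0                       # crude initial state
    C = [[sigma_meas ** 2 * 100.0, 0.0],      # large initial covariance
         [0.0, 1.0]]
    V = sigma_meas ** 2                       # measurement variance
    for k in range(1, len(z_planes)):
        dz = z_planes[k] - z_planes[k - 1]
        # predict: x -> x + s*dz, slope unchanged (F = [[1, dz], [0, 1]])
        x = x + s * dz
        c00 = C[0][0] + 2 * dz * C[0][1] + dz * dz * C[1][1]
        c01 = C[0][1] + dz * C[1][1]
        c11 = C[1][1] + theta_ms ** 2         # process noise on the slope
        # update with the measured hit on plane k (H = [1, 0])
        r = hits[k] - x                       # residual
        S = c00 + V
        kx, ks = c00 / S, c01 / S             # Kalman gain
        x += kx * r
        s += ks * r
        C = [[(1 - kx) * c00, (1 - kx) * c01],
             [c01 - ks * c00, c11 - ks * c01]]
    return x, s
```

Run on a noiseless straight track, the estimated slope converges to the true one within a few planes, which is the behaviour the text describes: each plane's measurement refines the state without refitting all hits at once.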
In the TKR, the strips that were hit are first converted into positions. If two
adjacent strips have been hit, they are merged to form a single hit cluster. In the
second step, candidate tracks are formed based on the reconstructed clusters and
individual track energies aid in the track recognition. Then, the three-dimensional
position and direction are determined and their errors are estimated. Finally, the
fitted tracks are used to determine the vertex where the pair-production took place.
The iteration occurs when the tracks are used to get an improved energy estimate
after which the tracks can be refit and a new vertex position can be found.
In the CAL, the energy is determined by first applying pedestals (voltage offsets at the analog-to-digital converter inputs) and gains to the raw digitised signals from each crystal. The energy is then the sum of all depositions above a certain threshold in all crystals. The longitudinal position in each crystal where the energy deposition took place is calculated from the known relation between this position and the energy measured at the two crystal ends. If the light attenuation in the crystal is strictly exponential, this relation takes the mathematical form of Eq. 4.1,

x = K \cdot \tanh^{-1} A ,    (4.1)

where A = (Left − Right)/(Left + Right) is the so-called light asymmetry, Left and Right are the measured energies at the two crystal ends, x is the longitudinal position of impact in the crystal and K is a constant factor.
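Eq. 4.1 and its inverse can be sketched as follows (the value of K and the signal-splitting helper are illustrative assumptions, not values from the thesis):

```python
import math

def longitudinal_position(left, right, k):
    """Eq. 4.1: x = K * atanh(A), with the light asymmetry
    A = (Left - Right) / (Left + Right)."""
    a = (left - right) / (left + right)
    return k * math.atanh(a)

def crystal_signals(x, k, total):
    """Inverse relation (for illustration only): split 'total' into two
    end signals reproducing the asymmetry A = tanh(x / K)."""
    a = math.tanh(x / k)
    return total * (1 + a) / 2.0, total * (1 - a) / 2.0
```

A round trip through the two functions recovers the original position, confirming the two relations are consistent inverses.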
With the calculated array of energy depositions and positions for each crystal
in the CAL, the direction can be determined via energy moments analysis for each
shower. This procedure is similar to a moments of inertia analysis, in which a
moments ellipsoid is calculated. The principal axis of this ellipsoid provides the
incoming direction of the particle and the centre of the ellipsoid determines the
centre of gravity for the shower.
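The moments analysis can be sketched in pure Python (illustrative only; the real reconstruction is more elaborate): build the energy-weighted second-moment matrix about the centroid and extract its principal axis, here via power iteration.

```python
def shower_axis(points, energies, iters=200):
    """Energy-weighted principal axis of a set of 3D hits (cf. the moments
    analysis): power iteration on the 3x3 second-moment matrix about the
    energy centroid. Returns (centroid, unit axis)."""
    w = sum(energies)
    cog = [sum(e * p[i] for p, e in zip(points, energies)) / w
           for i in range(3)]
    # second-moment matrix M[i][j] = sum_k e_k (p_k - cog)_i (p_k - cog)_j
    M = [[sum(e * (p[i] - cog[i]) * (p[j] - cog[j])
              for p, e in zip(points, energies))
          for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        v = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        n = max(sum(c * c for c in v) ** 0.5, 1e-30)
        v = [c / n for c in v]
    return cog, v
```

For a line-like energy deposit, the dominant eigenvector of the moment matrix lies along the deposit, so the returned axis estimates the incoming direction and the centroid estimates the shower's centre of gravity.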
The raw energy measured by the CAL after diode calibrations is stored in a
variable called CalEnergyRaw. It is never the same as the initial photon energy,
since energy is always lost in internal gaps between CAL modules, between crystals
and via leakages out of the back and the sides of the instrument. Therefore, different
energy correction algorithms have been developed to compensate for these effects
and to correct the energy to correspond to the incoming photon energy. There are
currently three such algorithms:
• The parametric method combines energy measurements from both the TKR and the CAL. At 100 MeV, about 50% of the total energy deposition is deposited in the TKR. The fraction decreases rapidly with energy and is only about 5% at 1 GeV. Energy estimations in the TKR can be done in many ways. There is e.g. a correlation between the number of SSD hits near or on the track and the particle energy, due to the large amount of multiple Coulomb scattering at lower energies. The opening angle of the pair-produced e⁺e⁻ is also an energy estimator. The parametric method provides a starting point for the re-fitting of the track in the TKR, and the energy is re-evaluated using the two other methods below. The corrected energy from the parametric method is stored in a variable called EvtEnergyCorr, which is included in one of the ROOT N-tuples used for analysing detector information.
• The likelihood method makes use of the correlation between the number of hits
in the TKR, the energy deposited in the last layer of the CAL and the total
raw energy deposited in the CAL. It has been optimised for various classes
of simulated events, such as gamma-ray conversions occurring in the thin or
thick tungsten layers of the TKR. The corrected energy using the likelihood
method is stored in a variable called CalLkHdEnergy.
• The profile fitting method looks at the layer-by-layer energy deposition and fits the longitudinal shower development, while also taking into account the transverse size of the shower. The method works best for photon energies above 1 GeV, when the shower maximum is contained within the CAL. The corrected energy using the profile method is stored in a variable called CalCfpEnergy.
The final output energy, representing the incoming photon energy, is a composition of the three corrected energies described above. The selection of energy method is based on a classification-tree analysis and the output is stored in the variable CTBBestEnergy.
4.2.5 On-orbit calibration
The Fermi-LAT instrument was calibrated before launch using ground-level cosmic-ray muons for the low-energy scale and charge injection into the front-end electronics of each detector for the high-energy scale. These calibrations were only approximations of the optimal trigger and timing settings, so they were re-done on-orbit using known astrophysical sources, galactic cosmic rays and charge injection [125].
The full calibration first of all involves synchronising the trigger signals from the different detector components and adjusting the delays for data acquisition, in order to maximise the trigger efficiency. The relative timing of the ACD and the TKR is optimised using proton candidates, since the main purpose of the ACD is to reject charged particles, whereas the relative timing between the TKR and the CAL is set using photon candidates.
The detector components are also individually calibrated. For the ACD, this
includes determining the mean values of pedestals, the signal pulse heights produced
by minimum-ionising particles in each ACD scintillator, veto threshold settings as
well as high-energy and coherent-noise calibrations for each PMT.
For the TKR, the calibrations concern the determination of noisy channels that
may affect the instrumental dead time and the data volume. Such channels have
to be masked to maintain optimal performance. Furthermore, trigger and data
latching (the process of reading out data from all subdetectors) thresholds are set
in order to minimise the noise occupancy while maximising the hit efficiency. The
calibration of the TKR also includes the determination of the conversion parameters from charge time-over-threshold (ns) to charge deposit (fC) and the absolute
calibration of the charge injection digital-to-analog converter.
For the CAL, calibrations are conducted for each crystal in terms of pedestal
values, light asymmetries and threshold settings. Also, the individual energy scales
of the crystals are calibrated using cosmic rays. Protons are used for the low-energy
scales whereas the high energies are calibrated via carbon nuclei, protons and other
galactic cosmic rays. Galactic cosmic-ray heavy nuclei, from carbon to iron, are
also used to independently monitor the energy scale at high energies.
In addition to these calibrations, the borders of the SAA are re-defined from pre-launch settings using diagnostic data in the form of fast trigger signals that remain
operational even inside the SAA. Alignment procedures have to be performed as
well, since the accuracy of reconstructed direction measurements depends on the
knowledge of the exact positions of each detector element. This includes intra-tower alignment (position and orientation of each detector element), inter-tower alignment (position and orientation of each tower) and spacecraft alignment (the rotation of the LAT with respect to the Fermi onboard guidance, navigation and control system). For the latter, gamma-rays near bright identified celestial point sources of gamma-ray emission are used.
On a final note, there have been only minor changes in the calibration constants
since launch and they have remained stable during the first 8 months of operations.
The calibrations are, however, updated on a regular basis in order to keep the
instrument at optimal efficiency.
4.2.6 Data structure
The data that is finally used in an analysis depends on the version of the background
rejection algorithm that is used. The background rejection algorithm is based
on a classification-tree analysis as well as detector variable cuts and affects the
performance of the detector. The versions are generally referred to as “passes” and
the version which is used in most analyses so far is Pass6V3, where V3 stands for
version 3. The version that is currently under development is Pass7.
The raw data from the Fermi -LAT undergoes a chain of different data processing
steps to arrive at what can be categorised into three parts: low-level data, high-level
data and spacecraft data.
The low-level data is saved in ROOT trees (called N-tuples) [126] and contains variables with information from the different detectors. The low-level N-tuples
used in the analysis presented in this thesis are called the SVAC-tuple (Science
Verification Analysis and Calibration) and CAL-tuple (only variables related to
the calorimeter).
The high-level data contains reconstructed and calibrated variables and are
stored in both a Merit-tuple (ROOT file) and in a so-called FT1-file (in FITS format).
The FT1-file is a subset of the Merit-file, obtained by performing a set of cuts on
some of the variables in the Merit-tuple, and contains only the variables needed for
a standard science analysis, e.g. arrival times, directions and energies.
The spacecraft file is generally referred to as the FT2-file (also in FITS format)
and consists of information regarding the satellite itself such as position, orientation
and live time data. This information is e.g. required when calculating the exposure
for an observed region.
The events in the FT1-file are organised into different event classes corresponding to different sets of cuts on the Merit level. The cuts further depend on the
version of the background rejection algorithm. The standard event class currently
recommended for high level analysis is called the diffuse class.
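Schematically, selecting an event class amounts to masking an event list with a set of cuts. The sketch below does this on a toy NumPy structured array; both field names and the single probability threshold are placeholders, not the real Pass6 diffuse-class definition, which involves many classification-tree and detector-variable cuts.

```python
import numpy as np

# Toy "Merit"-level event list; both fields are hypothetical stand-ins.
events = np.array(
    [(1200.0, 0.95), (450.0, 0.30), (80000.0, 0.88)],
    dtype=[("energy_mev", "f8"), ("class_prob", "f8")],
)

def select_event_class(evts, min_prob):
    """Keep events passing a (hypothetical) class-probability cut,
    analogous to selecting one event class when producing an FT1 file."""
    return evts[evts["class_prob"] >= min_prob]

diffuse_like = select_event_class(events, 0.8)  # 2 of the 3 toy events survive
```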
4.2.7 Performance
Many improvements in performance with respect to the predecessor EGRET (described in Section 2.4) have been made. This was accomplished partly by increasing the size and partly by using modern particle detection technology. An additional difference is that none of the subsystems in Fermi rely on consumables.
The performance, or instrument response functions (IRFs), of the LAT after
background rejection can be quantified in many ways but three common concepts
are:
• Point-spread-function (PSF), or the angular resolution, which measures how accurately a given incident direction can be measured by the detector. It can be quantified as the angular separation from the true angle below which 68% and 95% of the reconstructed angles are located for an ensemble of events.
• Effective area, which can be conceptually defined as the surface area of a
detector perpendicular to an incident particle if the detection efficiency is
100%. It can be calculated as the ratio between the rate of detected events
(s−1 ) and the incident flux (cm−2 s−1 ).
• Energy resolution, which is a measure of how accurately a specific energy
is measured by the detector. It can be quantified as the separation in energy from the true energy below which 68% of the reconstructed energies are
located for an ensemble of events divided by the true energy.
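Given an ensemble of simulated events with known true values, the three quantities can be estimated directly from the definitions above. The following sketch assumes plain arrays of true and reconstructed values; the function names are ours, not LAT software, and for brevity the angular separation is taken in one dimension rather than as a distance on the sphere.

```python
import numpy as np

def psf_containment(true_angles, reco_angles, q=0.68):
    """Angular separation from the true direction below which a
    fraction q of the reconstructed directions lie."""
    sep = np.abs(np.asarray(reco_angles) - np.asarray(true_angles))
    return np.quantile(sep, q)

def energy_resolution(true_e, reco_e, q=0.68):
    """Separation |E_reco - E_true| / E_true containing a fraction q."""
    rel = np.abs(np.asarray(reco_e) - np.asarray(true_e)) / np.asarray(true_e)
    return np.quantile(rel, q)

def effective_area(detected_rate, incident_flux):
    """Ratio of detected rate (s^-1) to incident flux (cm^-2 s^-1)."""
    return detected_rate / incident_flux
```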
The PSF, effective area and energy resolution all depend on the energy and the
incident angle as can be seen in Figs. 4.10, 4.11 and 4.12, respectively. The figures
have been produced using full-detector simulations where the true values of the
direction and the energy are known.
A summary of the performances, including the three concepts above, of the
Fermi -LAT and EGRET is given in Table 4.1.
Figure 4.10. The point-spread-function as a function of energy and incident angle
for the Fermi-LAT (from [127]).
Figure 4.11. The effective area as a function of energy and incident angle for the
Fermi-LAT (from [127]).
Figure 4.12. The energy resolution as a function of energy and incident angle for
the Fermi-LAT (from [127]).
Table 4.1. The performance and specifications of the Fermi-LAT compared to EGRET. Courtesy: NASA.

Quantity                          LAT (Minimum spec.)                 EGRET
Energy range                      20 MeV – 300 GeV                    20 MeV – 30 GeV
Peak effective area (1)           ∼8000 cm²                           1500 cm²
Field-of-view                     ∼2.4 sr                             0.5 sr
Angular resolution (2)            <3.5◦ (100 MeV), <0.15◦ (>10 GeV)   5.8◦ (100 MeV)
Energy resolution (3)             <15%                                20–25%
Dead time per event               <100 µs                             100 ms
Source location determination (4) <0.5'                               15'
Point source sensitivity (5)      <6 · 10⁻⁹ cm⁻² s⁻¹                  ∼10⁻⁷ cm⁻² s⁻¹

(1) After background rejection and for the LAT >1 GeV
(2) Single photon, 68% containment, on-axis
(3) 1σ, on-axis
(4) 1σ radius, flux 10⁻⁷ cm⁻² s⁻¹ (>100 MeV), high |b|
(5) >100 MeV, at high |b|, for exposure of one-year all sky survey, photon spectral index −2
4.3 Gamma-ray Burst Monitor
The Gamma-ray Burst Monitor (GBM) is a set of burst detectors located on the
sides of the Fermi satellite, as can be seen in Fig. 4.1. As opposed to the LAT,
which has a field-of-view of about 2 sr, the GBM has an almost complete coverage
of the unocculted sky.
The LAT is, by itself, capable of localising GRBs with a precision of about 10 arcmin. One of the most important features in the energy spectra of GRBs is a break, where the spectrum changes from one power-law to another. This break is located between 100 keV and 500 keV, which is well below the lower energy threshold of the LAT at 20 MeV. Simultaneously measuring the low- and high-energy contributions from GRBs therefore greatly increases the scientific return.
The GBM includes 12 thin NaI(Tl)-plates, sensitive in the energy range between
around 10 keV and 1 MeV, and 2 BGO detectors covering the energy range from
150 keV to 30 MeV. The BGO detectors therefore provide an overlap in energy
with the LAT instrument and are mounted on opposite sides of the LAT to provide observations of almost the whole unocculted sky. The NaI(Tl) detectors are
arranged to give a large field-of-view of >8 sr. There are 6 NaI(Tl) detectors in
the equatorial plane, 4 at a 45◦ angle and 2 at a 20◦ angle (on opposite sides).
The location of a burst is given by the relative count rates of the different NaI(Tl)
detectors. A sketch of the detectors in the GBM can be seen in Fig. 4.13.
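The flight localisation compares the measured rates against the tabulated angular responses of the detectors. As a zeroth-order illustration of why relative rates encode direction, a counts-weighted average of (entirely hypothetical) detector pointing axes already points roughly toward the source:

```python
import numpy as np

# Hypothetical unit vectors for the pointing axes of three NaI detectors
# and the background-subtracted counts each recorded during a burst.
axes = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
counts = np.array([100.0, 300.0, 100.0])

def rough_burst_direction(det_axes, det_counts):
    """Counts-weighted mean of the detector axes, normalised to a unit
    vector; only a crude stand-in for the real table-based comparison."""
    v = (det_counts[:, None] * det_axes).sum(axis=0)
    return v / np.linalg.norm(v)
```

Here the estimate leans toward the second detector's axis, since it saw the most counts.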
The GBM has three main tasks: to provide a burst alert for transmission to the ground, to determine the location of the burst and, finally, to measure the low-energy spectrum and light curve of the burst.
Figure 4.13. A sketch of the NaI(Tl) detectors (top) and the BGO detectors
(bottom) in the Gamma-ray Burst Monitor (from [128]).
Chapter 5
Calibration Unit beam test
In 2006, a beam test campaign was performed at CERN by the Fermi-LAT Collaboration. This chapter gives an introduction to the beam tests, including the motivations for performing them and the experimental setups.
5.1 Introduction
The Fermi -LAT instrument has been modelled with a Monte Carlo simulation package based on Geant4 [129, 130]. The simulation package has been developed by the
Fermi -LAT Collaboration and many properties of the detector, like the instrument
response function, which includes the effective area and the point-spread function
(see also Section 4.2.7), the background rejection and the energy reconstruction algorithms, have been determined or developed with this model. The accuracy of this
model is therefore crucial, which means that beam tests that measure the actual
response of the instrument in a controlled environment are of great importance.
In the beam tests, the physical processes in Geant4, including e.g. multiple
scattering and shower development, and the detector modelling, including the electronics, were tested. Geant4 only determines the energy loss in a given volume.
Therefore, the electronics have to be modelled independently and many quantities
derived from calibration procedures or from specifications have to be used.
The Fermi -LAT itself was not tested in a beam test due to the risks involved.
Therefore, the instrument could only be calibrated on the ground using cosmic muons.
Since the muon energies are much lower than the upper end of the Fermi -LAT
energy range, the calibration had to be tested also at high energies. This was also
one of the motivations to do a beam test.
The large energy range and field-of-view of the Fermi -LAT yield a very large
total phase space and a continuous scan of the whole phase space in a beam test
is not feasible. The goals of the validation can, however, be met with a sampling
of the phase space. In the performed beam tests, the sampling meant tilting the
detector with respect to the beam axis by different angles. The tilted configurations
are useful when estimating the effects of gaps, inherent between towers and between
calorimeter crystals, and the accuracy of the geometry describing them.
Two tests were performed at CERN and one at GSI. The latter facility provided
testing of the response of the detector for heavy ions but will not be reviewed in this
thesis. The beam tests at CERN were performed at the Proton Synchrotron (PS)
facility, starting in July 2006, and in the Super Proton Synchrotron (SPS) facility
during September 2006. The PS facility provided photons via tagging (explained
in Section 5.3), electrons, protons and pions at energies 0.5–10 GeV, whereas the
SPS facility only provided electrons, protons and pions at 10–282 GeV.
5.2 Calibration Unit
As mentioned above, using the Fermi -LAT itself in a beam test was not feasible.
For this reason, the decision was made to build a new instrument using flight
spare modules and flight-like read-out electronics. This detector was named the
Calibration Unit (CU) and is shown in Fig. 5.1.
Figure 5.1. A photo of the Fermi-LAT Calibration Unit on top of a positioning
table.
Two full towers, with a TKR module and a CAL module, and an additional
CAL module were placed in a 1 × 4 support structure of aluminium. The detector
modules were enclosed in a protective, nitrogen-flushed, 2 mm thick aluminium Inner Shipping Container (ISC). Five flight-like ACD tiles were also included outside
the ISC. The tiles were included to be able to study backsplash, a significant issue
in the EGRET experiment, where shower particles hit the ACD tiles and give rise
to a rejection of primary photons (see also Section 4.2.3). The tiles were placed
outside the ISC in order to be able to change tile configurations quickly during the
beam tests.
The CU was placed on a positioning table, capable of moving the CU along two
horizontal axes (x and y) and rotating the CU around the vertical axis (θ). The
positioning table can be seen in Fig. 5.1. The black plates, seen in the middle
of the figure, contain the ACD tiles.
5.3 PS facility beam test
A beam of photons was not directly available at CERN. Therefore, one was created
in the T9 beam line at PS by deflecting electrons from an electron beam using a
magnet, thereby leaving mostly bremsstrahlung photons created in the detectors
upstream. This so-called photon tagger was a two-arm spectrometer and the detectors included are sketched in Fig. 5.2. A photograph of the test site is shown in
Fig. 5.3.
Figure 5.2. A sketch of the experimental setup at PS, showing the locations of
the Cherenkov detectors (C1 and C2 ), the plastic scintillators (S0, S1, S2, S4 and
Sh) and the Silicon-Strip Detectors (SSD1–SSD4 ) relative to the magnet, the beam
dump and the CU. The electrons are deflected with the magnet which leaves a beam
of bremsstrahlung photons created in the detectors upstream.
The first arm had two gas threshold Cherenkov-counters (C1 and C2 ) that were
used for particle identification, five plastic scintillators (S0, S1, S2, S4 and Sh) that
were used for monitoring, triggering and vetoing and Silicon-Strip Detector (SSD)
hodoscopes, SSD1 and SSD2, used for particle track measurements. S0 (with a
size of 15 × 40 × 1 cm3 ) provided monitoring of the total number of particles in
the beam and Sh (15 × 40 × 1 cm3 ), with a hole of 2.4 cm in diameter, was used to
reject particles in the “halo” of the beam. Both S1 and S2 had a small cross-section
and a thickness of 2 mm. They were used to select a small area of the beam.
After the first arm, a dipole magnet with a maximum bending power given by
50 cm × 1 T, deflected the electrons into the second arm of the spectrometer. In
the second arm, two additional SSD hodoscopes, SSD3 and SSD4, measured the
deflected electron direction. The final scintillator, S4 (10 × 10 × 1 cm3 ), defined
the acceptance of the spectrometer and was used for triggering.
Figure 5.3. A photo of the experimental setup at PS. The CU is located behind
the concrete blocks in the beam dump.
The bent track provided the energy of the deflected electron and, by difference with the beam energy, the energy of the photon going into the CU could be determined. In Fig. 5.4, the energy distributions measured by the CU and the tagger with 2.5 GeV electrons are shown [131]. The dotted line represents the photon
energies measured by the CU, the dashed line shows the deflected electron energies
measured by the tagger and the solid line is the sum of the two. As can be seen in
the figure, the sum is peaked around the beam energy.
Figure 5.4. The energy distributions from photons measured by the CU (dotted
line), the deflected electrons (dashed line) and the sum of the two (solid line). The
sum is peaked around the electron beam energy at 2.5 GeV (from [131]).
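The text above describes the principle (a bend measurement in the second tagger arm, with the photon energy obtained by difference with the beam energy) without giving explicit formulas. Under the standard small-angle dipole approximation, p [GeV/c] ≈ 0.3 · B [T] · L [m] / θ [rad], a sketch would look like the following; the function names are ours and the formula is an assumption, not taken from the thesis.

```python
def deflected_electron_energy(b_tesla, length_m, bend_angle_rad):
    """Ultrarelativistic electron momentum (GeV/c) from the bend angle in
    a dipole of field B and length L: p ~ 0.3 * B * L / theta
    (small-angle approximation)."""
    return 0.3 * b_tesla * length_m / bend_angle_rad

def tagged_photon_energy(beam_energy_gev, b_tesla, length_m, bend_angle_rad):
    """Photon energy as the difference between the beam energy and the
    energy of the deflected electron measured by the tagger arm."""
    return beam_energy_gev - deflected_electron_energy(
        b_tesla, length_m, bend_angle_rad)
```

With the 50 cm × 1 T bending power quoted above, a 0.15 rad bend corresponds to a 1 GeV electron, tagging a 1.5 GeV photon out of a 2.5 GeV beam.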
To recover from lost beam time due to accelerator issues, a second set of photon
data was also collected by using a different configuration. In this setup, the CU
operated as a stand-alone detector and the tagger information was neglected. The
accepted photons then constituted the full bremsstrahlung spectrum and a faster
read-out rate could be achieved. The direction of the photons was assumed to coincide with the beam direction given by the detectors before the magnet.
The other particle types (electrons, positrons, protons and pions) were collected using different configurations. The trigger settings for each particle type are summarised in Table 5.1 [131].
Table 5.1. The different configurations used for the different particle types in the PS beam test. The trigger is composed of the logical AND of the detectors involved and a bar over the detector corresponds to the logical NOT. The last column refers to what particles were tagged by the Cherenkov detectors in order to get the particle of interest [131].

Particle   Energy (GeV)   Trigger               Magnet   Cherenkov
γtag       ≈0.05–1.5      C1 C2 S1 S2 Sh S4     ON       tag e−
γfb        0–2.5          C1 C2 S1 S2 Sh        ON       tag e−
e−         1, 5           C1 C2 S1 S2 Sh S3     OFF      tag e−
e+         1              C1 C2 S1 S2 Sh S5     ON       tag e
p          6, 10          S1 S2 C1 C2 Sh        OFF      tag K
π−         5              S1 S2 C1 C2 Sh S3     OFF      tag µ−
One important topic is the study of the different sources of background that the
Fermi -LAT will encounter in orbit. The following areas have therefore also been
studied with the CERN beam test data.
• Albedo gamma-rays. These are gamma-rays produced when cosmic rays interact with the atmosphere of the Earth and enter the Fermi -LAT from the side
and the back. Some of these can mimic a gamma-ray with normal incidence.
• Hadronic interactions. Protons can interact with the instrument or with the
spacecraft, generating a hadronic cascade that can mimic an electromagnetic
shower in the CAL. To help reject most of these events, the Fermi -LAT
background rejection uses many reconstructed variables such as the transverse
size of the shower and the distance between the first hit in the tracker and
the ACD.
• Charged particles interacting in the Micro-Meteoroid Shield (MMS). If charged
particles enter the instrument, the ACD can be used to reject most of them.
However, if the charged particle interacts with the MMS, photons can be
produced within the Fermi-LAT field-of-view. For this study, an extra scintillator, S5, in front of a small MMS was used. The positrons used for the study were free from bremsstrahlung photons, since the magnet was used to deflect only the positrons into the CU.
The analysis shown in the next chapter was performed on a subset of the total
amount of data collected and the focus has been put on photons and electrons, since
these are most relevant for the dark matter line search described in Chapter 7.
5.4 SPS facility beam test
In the H4 beam line in the SPS facility, secondary beams of electrons, positrons, pions and protons in the energy range 10–300 GeV were available from a primary beam of protons at 450 GeV, but clean tertiary beams of electrons, pions and protons could also be used.
The external detectors and the experimental setup were similar to those in the
PS beam test and are shown in Fig. 5.5 and Fig. 5.6. The S1, S2 and Sh scintillators
composed the external trigger and two helium gas threshold Cherenkov-counters
were used for particle identification. The Sh scintillator consisted in this case of
four 15 × 40 cm2 tiles (denoted Sv1 –Sv4 in the figure), which were arranged to
form a 4 × 4 cm2 hole in the middle. The remaining scintillator, S0, had the same
purpose as in the PS beam test.
Figure 5.5. A sketch of the experimental setup at SPS, showing the locations of
the Cherenkov detectors (C1 and C2 ) and the plastic scintillators (S0, S1, S2, Sv1,
Sv2, Sv3 and Sv4 ) relative to the CU.
Figure 5.6. A photo of the experimental setup at SPS. The CU is located between
the two beam pipes.
The trigger settings for the different particle types are shown in Table 5.2. As
can be seen in the table, the Cherenkov counters were empty for the electron and
pion runs, since the tertiary beams mentioned above were used.
Table 5.2. The different configurations used for the different particle types in the SPS beam test. The trigger is composed of the logical AND of the detectors involved and a bar over the detector corresponds to the logical NOT. The last column refers to what gas was used in the Cherenkov detectors and what particles were tagged by the counter in order to get the particle of interest.

Particle   Energy (GeV)   Trigger            Magnet   Cherenkov
e−         10–282         S1 S2 Sh           OFF      empty
p          20, 100        S1 S2 C1 C2 Sh     OFF      He, tag π
π−         20             S1 S2 Sh           OFF      empty
Chapter 6
Beam test analysis
During the beam test campaign at CERN in 2006, a large amount of data was collected. This chapter contains the analysis of a sample of that data, chosen for this thesis because of its relevance to dark matter searches. It starts with a presentation of the general approach of the analysis and of the cuts that were made in order to get as clean a sample as possible. Then, studies of three different observables are shown: position, direction and energy.
6.1 Analysis approach
The focus of the analysis presented in this thesis was put on the photon data
collected at the PS facility beam test, since the Fermi -LAT instrument is dedicated
to measuring this type of particle, and the electron data collected at both the
PS and the SPS beam tests. These particles are most relevant for the search for a spectral line from dark matter, which is described in Chapter 7. The correct modelling of protons and pions is important for the background rejection, but they are not considered in this thesis.
In the SPS beam test, where higher energies could be achieved, no photon data
was collected. The energy resolution at the high end of the Fermi -LAT energy
range, where searches for dark matter annihilation lines are theorised to be more
successful, was therefore determined with electron data. Since both photons and
electrons produce electromagnetic showers, the results should be comparable.
The primary question investigated here is the following. Does the Geant4-based
simulation package developed for the Fermi -LAT reflect reality? Any differences
that might exist in a comparison between measured and simulated data can, in
principle, only have their origin in the following four categories:
• Calibration
• Geometry
• Software
• Physics
The first category is a problem with the real detector. If the subdetectors have been incorrectly calibrated in terms of their various thresholds and calibration constants, the effects would primarily present themselves in variables connected to energy measurements. The calibration category also includes e.g. timing and trigger settings.
The next category points to differences in material and precise geometry. In
reality, imperfections are bound to exist in the form of cracks, misalignments and
impure materials both in the CU, the external detectors and in the beam line.
The last two categories are issues in the simulation package. Software errors
can in many cases produce effects that can be mistaken as an issue belonging to
the other categories and the physics, implemented in Geant4, can in some cases be
simplified and may not account for all subprocesses occurring in reality.
All these issues affect the comparison between measured data and simulated
data. Disentangling them and determining which of the aforementioned categories
each difference belongs to is complicated and can in some cases prove to be impossible. The results obtained can, however, be used to tune the Geant4-based Monte
Carlo (MC) simulation package to better correspond to what is observed.
Any unresolved differences should be taken into account as systematic uncertainties in future physics analyses based on Fermi -LAT data. It should be stressed,
however, that translating differences between data and simulation, that are observed
for the CU, to the Fermi -LAT is non-trivial. Even though the main subdetectors in
the CU are flight spares and, thus, identical to the subdetectors in the Fermi -LAT,
many properties still differ. This includes e.g. the flight-like read-out electronics
used and the geometry and composition of the material surrounding the detector
towers.
The analysis described in this thesis focuses on identifying differences and, where
possible, finding reasons for them. Comparisons between data and simulation can
be done in multiple ways. One way is to calculate containment radii for the distributions and compare them. Another approach is to calculate the statistical
moments. The most important moments are the mean value and the root-mean-square (RMS), which in this case is defined as √[(1/N) Σᵢ (xᵢ − x_mean)²], where N is the number of bins. Correlations between different variables can also be investigated, in order to determine the origin of any discrepancy. All these approaches
have been exercised in this analysis.
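The comparison approaches, containment radii and the first two statistical moments, reduce to a few lines of code. The sketch below follows the binned RMS definition given above; the function names are ours.

```python
import numpy as np

def binned_rms(bin_values):
    """sqrt((1/N) * sum_i (x_i - x_mean)^2), N being the number of bins."""
    x = np.asarray(bin_values, dtype=float)
    return np.sqrt(np.mean((x - x.mean()) ** 2))

def containment_radius(values, q=0.68):
    """Half-width around zero containing a fraction q of the entries."""
    return np.quantile(np.abs(np.asarray(values, dtype=float)), q)
```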
The studies performed by the beam test working group span all four categories listed above. The calibration has been improved by including non-linearities in the crystals and corrections for the effects of cross-talk between adjacent crystals and diodes. Using dedicated calibration runs, the parameters determining the asymmetry curve in each CAL crystal and pedestal values have been calculated. New digitisation algorithms have been developed for the tracker. Comparisons have also been made between Geant3, Geant4, EGS5 [132] and Mars15 [133] to track down differences in physics, and the material within the detector and in the beam line has been thoroughly investigated.
The work within the beam test working group has been iterative. This means
that all files have been reprocessed after each correction and comparisons between
data and simulation have then been repeated once again.
6.2 Creating a clean sample
The measured data consists not only of the particles that are of interest. Various
sources of contamination and other effects are also included, which in turn diminish
the quality of the data sample. The most important of these effects are:
• Cosmic-ray contamination
• Beam contamination
• Noise
• Pile-up
• Gaps
In the simulation, the detector is in a background free environment, with no
interference from the effects mentioned above unless they are intentionally put
there. A first step in the analysis procedure must therefore be to create as clean a
data sample as possible.
Cosmic rays, consisting mostly of muons at ground level, affect the measurements in two different ways. Either they coincide with a particle from the beam or
they interact alone in the detector.
In the first case, the muon can create a track that leads into a neighbouring TKR
or CAL module. An example of how this could affect the analysis is if a significant
enough energy deposition is made by the muon in another CAL. This will distort
the reconstructed direction of the beam particle by the CAL. This scenario can be
avoided by cutting on the location of the energy centroid calculated in the CAL,
i.e. by requiring the centroid to be in the right tower.
For the second case, one of the dedicated muon runs, taken at various points
during the beam tests, can be looked at. As explained in Section 1.1, muons are
minimum ionising particles and interact according to the Bethe-Bloch formula.
Most muons will therefore on average deposit a similar amount of energy in the
CAL, thus, forming a peak in the energy spectrum. If an appropriate cut is made
in the total energy deposited in the CAL, most of the muons can be rejected. The
threshold should, however, not be too high, since then a major portion of the correct
events can be rejected as well.
In Fig. 6.1, the energy spectrum for a muon run is shown. In the figure, the
peak mentioned above is clearly visible. For this distribution, the 95% quantile can
be calculated. Doing this for the particular muon run gave a threshold placed at
about 267 MeV. This cut was used in all analyses except in the energy resolution
studies, where the threshold was, instead, set at 1000 MeV.
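Deriving such a threshold from a dedicated muon run is a one-line quantile computation over the deposited-energy distribution; the function and variable names below are ours, not actual analysis code.

```python
import numpy as np

def muon_threshold(muon_energies_mev, q=0.95):
    """Energy below which a fraction q of the dedicated-muon-run events
    lie; physics-run events depositing less than this in the CAL are
    treated as muon-like and rejected."""
    return np.quantile(np.asarray(muon_energies_mev, dtype=float), q)
```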
Figure 6.1. The energy spectrum of muons (counts versus deposited energy in MeV, on logarithmic axes).
A second cut, designed to remove coincident muons as well as contamination in
the beam, is to reject energies larger than the beam energy. The energy of e.g. a
bremsstrahlung photon cannot be higher than the energy of the electron that gave
rise to it.
Events caused by noise can be a contributing factor and should be rejected so
that distributions of interest are not affected. The TKR is, however, already a
low noise instrument and in the CAL, various thresholds were set to reject crystal
read-outs with an energy deposition below the threshold. For this reason, no cut
was included purely for the purpose of avoiding noise.
If the rate of particles in the beam is high enough, comparable to the dead
time of the instrument, the resulting pulses measured by the instrument can pile
up and give a false reading. To avoid this, a cut can be made on the time between
consecutive events. In this case, motivated by the characteristics of the electronics,
only events separated from the previous event by more than 0.5 ns were accepted.
A large contribution to differences between data and simulation can come from
geometrical effects in the form of gaps between towers and between CAL crystals.
This means that the fraction of events that go into gaps can be different between
data and simulation, because the beam spots are not exactly the same. This can
have a large impact on the end result.
To avoid beam background but also as many geometrical effects as possible,
the following cuts can be made. The first cut rejects any events, whose first hit
in the TKR is outside a given perimeter around the beam spot. The position of
the energy centroid is also required to be in the correct tower and not close to
any of the tower edges. Another cut designed to have the same effect makes sure
that the reconstructed track is not close to gaps between towers or between CAL
crystals. These cuts also avoid scenarios where particles deposit energy close to
or directly into the crystal diodes. In these cases, the relation between position
and light yield becomes unreliable and the measured energy in the diodes can have
large fluctuations. To further avoid these effects, all studies have been performed
on data runs with an incoming particle inclination of 0◦ relative to the z axis of the
CU.
To be able to compare the direction and position as measured by the TKR and
CAL, a track in the TKR is required. Therefore, a cut was included on the total
number of potential tracks in the TKR. The requirement was that there should be
at least one.
A final cut was made on the reconstructed directions in the CAL. The corresponding distributions should be monotonically decreasing from the direction of
the beam spot. A small fraction of events, however, had a reconstructed direction
in the CAL that was about 90◦ away from the reconstructed TKR direction. These
events are clear cases of failed reconstructions and the fraction of events with failed
reconstructions can be different for data and simulation. Therefore, only events
with an angle less than the angle where the monotonically decreasing distribution
turns into a monotonically increasing distribution were accepted.
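Several of the cuts described in this section can be expressed as a single boolean mask over per-event arrays. The sketch below combines the muon energy cut, the beam-energy cut, the pile-up time cut and the track requirement; all names are illustrative rather than actual N-tuple variables, and the geometry and failed-reconstruction cuts are omitted.

```python
import numpy as np

def clean_sample_mask(cal_energy, event_time, n_tracks,
                      muon_threshold_mev, beam_energy_mev,
                      min_dt_ns=0.5):
    """Boolean mask combining some of the cuts described above (muon
    energy cut, beam-energy cut, pile-up time cut, track requirement)."""
    cal_energy = np.asarray(cal_energy, dtype=float)
    event_time = np.asarray(event_time, dtype=float)
    n_tracks = np.asarray(n_tracks)
    dt = np.diff(event_time, prepend=-np.inf)  # time since previous event
    return ((cal_energy > muon_threshold_mev)
            & (cal_energy < beam_energy_mev)
            & (dt > min_dt_ns)
            & (n_tracks >= 1))
```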
6.3 Position reconstruction in the CAL
When studying the CAL in terms of its position and direction reconstructions, the
TKR can be used as a reference. Quantifying how well data and simulation agree
for position measurements in the CAL was done as follows. An extrapolation of
the track as measured by the TKR module was made from the location of first
hit to the top of the CAL at -47.8 mm along the z axis by using the directional
information reconstructed from the TKR. The same was done for the CAL, but
the extrapolation was then done from the measured energy centroid (the centre
of “gravity” for the reconstructed energy ellipsoid) and up to the top of the CAL
using the directional information reconstructed for the CAL.
The equations used to extrapolate the tracks are given in Eq. 6.1 and Eq. 6.2,

Sext.Tkr[X/Y] = Tkr1[X/Y]0 + (−47.8 − Tkr1Z0) / Tkr1ZDir · Tkr1[X/Y]Dir ,   (6.1)

Sext.Cal[X/Y] = Cal[X/Y]Ecntr + (−47.8 − CalZEcntr) / CalZDir · Cal[X/Y]Dir ,   (6.2)
where Tkr1[X/Y/Z]0 is the x, y and z coordinate for the first hit in the TKR and for
the best track out of all potential track permutations in the TKR. The two variables
Tkr1[X/Y/Z]Dir and Cal[X/Y/Z]Dir are the so-called directional cosines, i.e. the
cosines of the angles relative to the three coordinate axes and Cal[X/Y/Z]Ecntr is
the x, y and z coordinate of the energy centroid in the CAL for each event.
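Eqs. 6.1 and 6.2 are the same straight-line projection applied to two different starting points (the first TKR hit and the CAL energy centroid), which can be sketched as follows. The helper name is hypothetical; the z coordinate of the CAL top, −47.8 mm in CU coordinates, is taken from the text:

```python
def extrapolate_to_cal_top(x0, y0, z0, xdir, ydir, zdir, z_top=-47.8):
    """Project a straight track, given by a point (x0, y0, z0) and
    direction cosines (xdir, ydir, zdir), to the plane z = z_top,
    the top of the CAL in CU coordinates (Eqs. 6.1 and 6.2)."""
    t = (z_top - z0) / zdir      # signed path length in units of zdir
    return x0 + t * xdir, y0 + t * ydir
```

The same function serves both equations: once with the TKR first-hit point and track direction, once with the CAL energy centroid and CAL axis direction.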
Once the extrapolated positions were evaluated at the same point, in this case at the top of the CAL, the difference between the extrapolated positions from the TKR and the CAL (the difference between the two equations above) could be taken. The resulting distributions should be centred at zero, and this was confirmed by observations. The 68% quantile was then taken on the absolute value of the distributions. The results in the x and y directions for bremsstrahlung photons from electrons at 2.5 GeV, tagged photons from electrons at 2.5 GeV and electrons at 5 GeV are shown in Table 6.1. The relative difference in per cent between the quantiles of the distributions from data and simulation can be found in Table 6.2, and the distributions from which the quantiles are calculated, normalised by the number of events, can be seen in Figs. 6.2–6.4.
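The 68% quantile on the absolute residuals can be computed as, for instance (a minimal sketch; the function name is hypothetical):

```python
import numpy as np

def containment_68(residuals):
    """68% containment of the absolute residuals: the value below
    which 68% of the |TKR - CAL| position differences fall, i.e.
    the quantile reported in Table 6.1."""
    return np.quantile(np.abs(np.asarray(residuals)), 0.68)
```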
Table 6.1. The 68% containment of the absolute value of the difference between the extrapolated TKR position and the extrapolated CAL position for the different types of particles studied. The values calculated from simulations are denoted by MC.

Particle   X68% (mm)    XMC,68% (mm)   Y68% (mm)    YMC,68% (mm)
γfb        12.0 ± 0.2   10.6 ± 0.2     11.6 ± 0.2   10.2 ± 0.1
γtag       12.9 ± 0.2   10.5 ± 0.1     12.3 ± 0.2   10.6 ± 0.1
e−         4.7 ± 0.1    4.2 ± 0.1      5.2 ± 0.1    4.3 ± 0.1
Table 6.2. The relative differences in position reconstruction between data and simulation for the different particle types studied.

Particle   ∆X68%/X68% (%)   ∆Y68%/Y68% (%)
γfb        11.8 ± 0.3       12.0 ± 0.3
γtag       18.8 ± 0.4       14.0 ± 0.3
e−         12.1 ± 0.2       16.9 ± 0.2

6.3.1 Asymmetry curves
As described in Section 4.2.4, the light asymmetry, i.e. the relation between the
energy deposition and its longitudinal position in each crystal of the CAL, is an
integral part of the CAL calibration procedure. An illustration and a validation of
this asymmetry can be done with the proper data set.
Figure 6.2. The distribution of positional difference between the TKR and CAL
positions, extrapolated to the top of the CAL, in the x (top) and y direction (bottom)
for bremsstrahlung photons from electrons at 2.5 GeV. The solid red line is data and
the dashed blue line is simulation.
Figure 6.3. Same as in Fig. 6.2 but for tagged photons from electrons at 2.5 GeV.
Figure 6.4. Same as in Fig. 6.2 but for electrons at 5 GeV.
During the beam tests, a set of runs was taken in which the impact point of the beam was changed stepwise along two crystals oriented perpendicular to each other in the centre of the CAL. A schematic of the set of runs can be seen in Fig. 6.5. This particular set of runs can be used to inspect the asymmetry properties of the CAL crystals. Full coverage in the form of a 12 × 12 array of impact points would have been ideal, but due to limitations in beam time, a solution with limited coverage was used instead.
Figure 6.5. Schematic showing the placement of the beam in the calibration runs.
The analysis procedure was as follows. For the calibration to be successful, the crystals hit by the trajectory and shower of each incoming particle must be known. Multiple tracks can cause multiple hit points in the same crystal, which makes calibration more difficult. Another issue occurs if the track reconstruction is poor: an extrapolated track might then not point to the real point of impact. Therefore, only events with a single track in the TKR and a track χ2 value between 1 and 2 were selected for analysis. For muons, no shower is produced, so no cut on the χ2 value was needed.
The crystal that was hit is deduced by extrapolating the track from the TKR down to the CAL crystal of interest and selecting only events that lie within the crystal boundaries.
In the left plot of Fig. 6.6, the asymmetry has been plotted as a function of the TKR position, extrapolated to the level where the crystal of interest is located. For the calibration runs described above, electrons at 5 GeV and 0◦ incoming angle were used. In this case, for crystals in the 2nd layer from the top, which is the first layer with crystals oriented along the beam scanning direction, the beam deposited the most energy in the 7th crystal.
The right plot of Fig. 6.6 shows the same data as the left plot, but the axes have been binned into 12 bins of equal width, each chosen to correspond to approximately the width of a crystal, and the mean asymmetry was calculated in each bin. The points were fitted, for simplicity, with a quadratic function, f (x) = p0 + p1 x + p2 x2 , since its precision suffices for the study shown below. In the first and the last bin, the position measurement relation fails, so these bins were not included in the fit.
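The binning-and-fitting step can be sketched as follows. The bin range is an assumed crystal span and the function name is hypothetical; only the structure (12 bins, mean per bin, quadratic fit excluding the edge bins) follows the text:

```python
import numpy as np

def fit_asymmetry(positions, asymmetries, n_bins=12, lo=-167.0, hi=167.0):
    """Mean light asymmetry in n_bins position bins of roughly one
    crystal width, fitted with f(x) = p0 + p1*x + p2*x**2.  The
    first and last bins, where the position-asymmetry relation
    fails, are excluded from the fit.  The bin range (lo, hi) is an
    assumed crystal span, not a value quoted in the text."""
    positions = np.asarray(positions)
    asymmetries = np.asarray(asymmetries)
    edges = np.linspace(lo, hi, n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    idx = np.digitize(positions, edges) - 1
    means = np.array([asymmetries[idx == i].mean() for i in range(n_bins)])
    p2, p1, p0 = np.polyfit(centres[1:-1], means[1:-1], deg=2)
    return p0, p1, p2
```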
For comparison, the same log was studied with muon data, which is also used for other calibration purposes on ground. The corresponding plots can be seen in Fig. 6.7.

[Fit result for electrons: χ2/ndf = 2.479/7, p0 = 6.307e-007 ± 1.646e-007, p1 = 0.001575 ± 0.000011, p2 = −0.005164 ± 0.001284.]

Figure 6.6. The light asymmetry as a function of the extrapolated tracker position in one calorimeter crystal (layer 2, crystal 7) for electrons at 5 GeV and 0◦ angle. The left plot illustrates the spread in light asymmetry and the right plot shows the mean light asymmetry in each bin fitted with a quadratic function, f (x) = p0 + p1 x + p2 x2 .
[Fit result for muons: χ2/ndf = 6.85/7, p0 = 3.118e-007 ± 1.751e-007, p1 = 0.0016 ± 0.0000, p2 = −0.00806 ± 0.00162.]
Figure 6.7. The light asymmetry as a function of the extrapolated tracker position
in one calorimeter crystal (layer 2, crystal 7) for muons. The left plot illustrates the
spread in light asymmetry and the right plot shows the mean light asymmetry in
each bin fitted with a quadratic function, f (x) = p0 + p1 x + p2 x2 .
The design requirements of the CAL crystals state that the position precision should be 30 mm or better. In Fig. 6.8, the position error in one CAL crystal is shown for both electrons at 5 GeV and muons. The plots show the difference between the TKR position, extrapolated into the log, and the position deduced from the asymmetry, which is based on the fits in Fig. 6.6 and Fig. 6.7. For electrons, the 68% containment of the position error (centred at zero) in the selected CAL crystal was 4.3 ± 0.2 mm. For muons, the equivalent value was 13.1 ± 0.4 mm. The design requirements are therefore met.
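Deducing the position from the measured asymmetry amounts to inverting the fitted quadratic curve. A sketch, with a hypothetical helper name and illustrative parameter values in no way tied to the fits above:

```python
import math

def position_from_asymmetry(asym, p0, p1, p2):
    """Invert the fitted quadratic asymmetry curve
    asym = p0 + p1*x + p2*x**2 to recover the hit position x in the
    crystal.  Of the two roots, the one closer to the crystal
    centre (x = 0) is kept; for the nearly linear curves of
    Figs. 6.6-6.7 the quadratic term is only a small correction."""
    if abs(p2) < 1e-12:                     # effectively linear curve
        return (asym - p0) / p1
    disc = p1 * p1 - 4.0 * p2 * (p0 - asym)
    r1 = (-p1 + math.sqrt(disc)) / (2.0 * p2)
    r2 = (-p1 - math.sqrt(disc)) / (2.0 * p2)
    return r1 if abs(r1) < abs(r2) else r2
```

The position error of Fig. 6.8 is then the difference between this value and the extrapolated TKR position.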
The larger spread for muons comes from the fact that many muons traverse the crystal at an angle and deposit energy over a larger segment than a pencil beam of electrons does, since the incoming electron directions are most of the time roughly perpendicular to the crystal log. The light asymmetries are therefore less accurate for muons.
Figure 6.8. The difference between the extrapolated tracker position and the position of the hit deduced from the light asymmetry in one calorimeter crystal (layer 2, crystal 7) for electrons at 5 GeV (left) and muons (right).
6.4 Direction reconstruction in the CAL
Directional variables in Fermi-LAT data, as explained before, are output in the form of direction cosines, i.e. the cosines of the angles between the incoming particle direction vector and the three coordinate axes. A distribution for the direction, on which a 68% containment calculation can be performed in the same way as for the position reconstruction, can be obtained with the following formula:

θ = π − arccos(Tkr1XDir · CalXDir + Tkr1YDir · CalYDir + Tkr1ZDir · CalZDir) ,   (6.3)

where π comes from the fact that the coordinate systems in the TKR and in the CAL are defined differently. In the TKR, a right-handed coordinate system is used with the positive z direction pointing in the direction of the beam. In the CAL, a left-handed coordinate system is used with the negative z axis pointing in the direction of the beam. To get the proper angle between the two direction vectors, the one from the TKR and the one from the CAL, a translation of 180◦ must therefore be made. The expression inside the parentheses is the scalar product between the two vectors, which, since both are normalised, have unit length.
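Eq. 6.3 translates directly into code (a minimal sketch; the function name is hypothetical):

```python
import numpy as np

def space_angle_deg(tkr_dir, cal_dir):
    """Space angle of Eq. 6.3 between the TKR and CAL direction
    cosines.  The pi offset compensates for the opposite z-axis
    conventions of the two subsystems, so that identical physical
    directions yield an angle of zero."""
    dot = float(np.clip(np.dot(tkr_dir, cal_dir), -1.0, 1.0))
    return np.degrees(np.pi - np.arccos(dot))
```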
The same three particle types studied for the position reconstruction were also studied here. Figs. 6.9–6.11 show the resulting distributions. The cuts explained in Section 6.2 have been used and the distributions are again normalised by the total number of events. The 68% quantiles and the differences between data and simulation are given in Table 6.3.
Figure 6.9. The space angle distribution for bremsstrahlung photons from electrons
at 2.5 GeV. The solid red line is data and the dashed blue line is simulation.
Figure 6.10. Same as Fig. 6.9 but for tagged photons from electrons at 2.5 GeV.
Figure 6.11. Same as Fig. 6.9 but for electrons at 5 GeV.
Table 6.3. The 68% containment of the difference between the TKR direction and the CAL direction for bremsstrahlung photons from electrons at 2.5 GeV, tagged photons from electrons at 2.5 GeV and electrons at 5 GeV, and the difference between data and simulation (MC).

Particle   Ψ68% (◦)     ΨMC,68% (◦)   ∆Ψ68%/Ψ68% (%)
γfb        11.5 ± 0.1   9.9 ± 0.1     13.6 ± 0.2
γtag       12.0 ± 0.1   9.8 ± 0.1     18.4 ± 0.3
e−         3.9 ± 0.03   3.0 ± 0.02    22.1 ± 0.2

6.5 Energy reconstruction in the CAL
Many aspects of the energy reconstruction can be studied with the collected data.
For this thesis, raw energy distributions, shower profiles in the longitudinal direction
and energy resolutions were studied.
6.5.1 Raw energy distributions
There are several variables in the data that are related to energy measurements. Among these are the layer-wise raw energies measured in the CAL and the sum of all energy depositions in all crystals of the CAL (after diode calibrations). The layer-by-layer approach offers a more in-depth look than the total energy alone. In this case, the figures-of-merit are the statistical moments, or more specifically the mean value and the RMS.
In Fig. 6.12, the difference between data and simulation in terms of these statistical moments is plotted for all eight layers of the CAL. The plots, as before, correspond to bremsstrahlung photons from electrons at 2.5 GeV, tagged photons from electrons at 2.5 GeV and electrons at 5 GeV, respectively. The difference is less than 10% in all layers for photons and less than 16% for electrons. The trend is similar in all three plots, namely that the agreement improves with increasing layer number.
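The layer-wise comparison of Fig. 6.12 can be sketched as follows; the helper name is hypothetical and the inputs are assumed to be event-by-layer arrays of raw energies:

```python
import numpy as np

def layer_moment_differences(data_layers, mc_layers):
    """Relative difference (Data-MC)/Data, in per cent, of the mean
    and RMS of the layer-wise energy distributions, as plotted in
    Fig. 6.12.  Inputs are (n_events, n_layers) arrays of raw layer
    energies for data and simulation."""
    d_mean, m_mean = data_layers.mean(axis=0), mc_layers.mean(axis=0)
    d_rms, m_rms = data_layers.std(axis=0), mc_layers.std(axis=0)
    return (100.0 * (d_mean - m_mean) / d_mean,
            100.0 * (d_rms - m_rms) / d_rms)
```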
6.5.2 Longitudinal profile
In Fig. 6.13, the sum of all longitudinal shower profiles is visualised. In the plots,
the energy deposition is shown as a function of the eight layers in the CAL. The
figure shows that the shower profiles from data and simulation are almost identical.
For photons, the shower maximum is located mostly in the second layer of the
CAL, whereas for electrons, the shower maximum is most common in the middle
layers of the CAL (layers 4 and 5). The difference in shower maximum between
the photon runs and the electron run can be explained by the fact that the beam
energy is a factor of two larger in the electron run. Since the shower maximum changes logarithmically with the incoming energy, the maximum for higher-energy particles occurs on average later in the CAL.

Figure 6.12. The difference between data and simulation (MC) in terms of the mean value (solid line) and the RMS (dashed line) of the energy distributions in the eight calorimeter layers for bremsstrahlung photons from electrons at 2.5 GeV (top), tagged photons from electrons at 2.5 GeV (middle) and electrons at 5 GeV (bottom).
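The logarithmic growth of the shower maximum mentioned above can be illustrated with the standard parametrisation t_max ≈ ln(E/Ec) + C, in radiation lengths. The CsI critical energy used here is an assumed textbook-style value, not a number quoted in the text:

```python
import math

def shower_max_depth(e_gev, critical_energy_mev=11.2, particle="electron"):
    """Approximate depth of the electromagnetic shower maximum in
    radiation lengths, t_max ~ ln(E/Ec) + C, with C = -0.5 for
    electron-induced and +0.5 for photon-induced showers.  The CsI
    critical energy of ~11.2 MeV is an assumed value."""
    c = -0.5 if particle == "electron" else 0.5
    return math.log(1000.0 * e_gev / critical_energy_mev) + c
```

Doubling the beam energy thus shifts the shower maximum deeper by ln 2 ≈ 0.7 radiation lengths, in line with the later maximum seen for the 5 GeV electron run.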
6.5.3 Energy resolution
The photon tagger was only available in the PS beam line. Therefore, electromagnetic properties at higher energies must be studied with electron data from the SPS beam line. The behaviour at the high end of the Fermi-LAT energy range should be studied, since the masses of many dark matter particle candidates are predicted to lie there. A key factor in searches for spectral lines from dark matter is the energy resolution. The worse the energy resolution, the more photons from sources other than dark matter are included in a bin matched to the energy resolution. This decreases the significance of the spectral line and, consequently, makes the search more difficult.
In Figs. 6.14–6.20, the energy distributions for data and simulation are shown for electrons at the energies 5, 10, 20, 50, 99, 196 and 282 GeV. The distributions included are the measured raw energy in the CAL (CalEnergyRaw), the available algorithms for energy reconstruction (CalCfpEnergy, CalLkHdEnergy and EvtEnergyCorr), and, for comparison in the simulation, the true energy (McEnergy). CalCfpEnergy is the energy estimated with the profile fitting method, CalLkHdEnergy the energy estimated with the likelihood method and EvtEnergyCorr the energy estimated with the parametric method. The three energy reconstruction algorithms are further described in Section 4.2.4. As can be seen in the figures, a spread in the true energy was included in the simulation to reflect beam conditions.
In Fermi-LAT data, the best of the three energy algorithms is chosen event-by-event and stored in CTBBestEnergy. This variable is not meant to be used with the CU, since it is part of an algorithm that bases the choice on classification-tree analyses optimised for the geometry of the Fermi-LAT. How the choice is made also depends on the version of the software that processes the data, since improvements are made continuously. For these reasons, CTBBestEnergy was not included in these plots.
The energy resolutions were calculated by first fitting the tip of each energy distribution with a single Gaussian function. Since the distributions are not symmetrical in shape, a fit to the whole spectrum would not yield a good fit; the Gaussian fitting interval was therefore restricted to the tip of the peak.
The fit provided an estimate of the most probable energy. From that point, events were counted symmetrically in both directions around the most probable energy until 68% of the total number of events were accounted for. The resulting energy interval divided by two is the equivalent of a Gaussian standard deviation and yields the relative energy resolution when divided by the most probable energy.
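The two-step procedure (peak estimate, then symmetric 68% counting) can be sketched as follows. Here the histogram peak stands in for the restricted Gaussian tip fit, so this is an approximation of the method described above, not the analysis code:

```python
import numpy as np

def energy_resolution(energies):
    """Relative energy resolution: estimate the most probable
    energy, then widen a symmetric window around it until 68% of
    all events are contained; half of that window divided by the
    peak energy is the resolution.  The histogram peak stands in
    for the restricted Gaussian tip fit used in the analysis."""
    energies = np.asarray(energies, dtype=float)
    counts, edges = np.histogram(energies, bins=200)
    imax = int(np.argmax(counts))
    peak = 0.5 * (edges[imax] + edges[imax + 1])   # most probable energy
    dev = np.sort(np.abs(energies - peak))
    half_width = dev[int(0.68 * len(dev))]         # symmetric 68% window
    return half_width / peak
```

For a roughly Gaussian peak this reproduces sigma/peak, while remaining robust against the asymmetric low-energy tail.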
Figure 6.13. The energy deposition as a function of the calorimeter layer for bremsstrahlung photons from electrons at 2.5 GeV (top), tagged photons from electrons at 2.5 GeV (middle) and electrons at 5 GeV (bottom). At each layer there are two columns: the left column is data and the right column is simulation.

Figure 6.14. The energy distributions for electrons at 5 GeV for data (top) and simulation (bottom). Included are the measured raw energy (CalEnergyRaw), the three algorithms for energy reconstruction (CalCfpEnergy, CalLkHdEnergy and EvtEnergyCorr), and, for the simulation, the true energy (McEnergy).

Figure 6.15. Same as Fig. 6.14 but for 10 GeV.

Figure 6.16. Same as Fig. 6.14 but for 20 GeV.

Figure 6.17. Same as Fig. 6.14 but for 50 GeV.

Figure 6.18. Same as Fig. 6.14 but for 99 GeV.

Figure 6.19. Same as Fig. 6.14 but for 196 GeV.

Figure 6.20. Same as Fig. 6.14 but for 282 GeV.

In Fig. 6.21 and Fig. 6.22, the resulting energy resolutions as a function of the energy are shown for data and simulation, respectively. Fig. 6.23 shows the relative difference between the two. As can be seen in Fig. 6.20, CalLkHdEnergy has a sharp cut-off at 300 GeV.
The reason for the cut-off is that the likelihood method has only been extended to 300 GeV. Since CalLkHdEnergy does not function well at the highest energy, 282 GeV, the energy resolution value was left out at this energy.
Figure 6.21. The energy resolutions for data, determined from the different energy
distributions at the energies 5, 10, 20, 50, 99, 196 and 282 GeV.
Figure 6.22. The energy resolutions for the simulations, determined from the
different energy distributions at the energies 5, 10, 20, 50, 99, 196 and 282 GeV.
Figure 6.23. The difference in energy resolution between data and simulation,
determined from the different energy distributions at the energies 5, 10, 20, 50, 99,
196 and 282 GeV.
In Fig. 6.24, the relative difference in the fitted peak position between data and
simulation is shown. As can be seen in the figure, the peak position in data is larger
at all measured energies but the trend is relatively flat.
Figure 6.24. The difference in the energy of the peak between data and simulation,
determined from the different energy distributions at the energies 5, 10, 20, 50, 99,
196 and 282 GeV.
6.6 Latest developments
The analysis above was performed with a set of simulations in which the so-called Landau-Pomeranchuk-Migdal (LPM) effect was turned off. The effect governs the energy-dependent suppression of bremsstrahlung in charged particle interactions and had an incorrect implementation in the Geant4 code, as discovered by the beam test working group. The error caused the differences between data and simulation to be erroneously large.
After the above analysis was finished, a correct description of the LPM effect was implemented. Due to time constraints, however, the above analysis could not be redone. A preliminary illustration of the difference between the old, the new and no implementation of the effect can be seen in Fig. 6.25. The figure shows the ratio between data and simulation in terms of the mean energy deposited in the 8 layers of the CAL for electrons from 10 GeV to 282 GeV at 0◦ incidence angle. As can be seen in the figure, the impact of the LPM effect is more significant at higher incident energies and for CAL layers located closer to the TKR. The old implementation of the LPM effect also shows much larger differences.
Preliminary analyses using the latest simulations, with additional material (in this case lead) at the end of the beam line, have also indicated a decrease in the differences between data and simulation. This is illustrated in Fig. 6.26, where the ratio between data and simulation is shown for the number of clusters in the TKR and for the mean energy deposited in the CAL layers for electrons from 10 GeV to 282 GeV at 0◦ incidence angle. The ratio for the number of clusters inside an energy- and angle-dependent cone, centred on the reconstructed axis of the best track and starting at the head of track 1, is shown for thin converter layers (ilayer = −2.5), thick converter layers (ilayer = −1.5) and no converter layers (ilayer = −0.5) in the TKR. As indicated by Fig. 6.26, the optimal amount of additional material seems to lie somewhere below 0.1X0 , at which point the curves should be flat and the global scaling factor is about 10%.
6.7 Summary and conclusions
For the figures-of-merit for position reconstruction in the CAL, namely the 68% containment of the distributions of the difference between the tracks extrapolated from the TKR and the CAL, respectively, to the top of the CAL, the values for data are larger by 11.8–18.8% for both electrons and the two types of photons (bremsstrahlung photons and tagged photons). The difference is also clearly seen in the corresponding distributions, where the distributions for data are less peaked than those for the simulations.
Figure 6.25. The ratio between data and simulation for the mean energy deposited in the 8 layers of the CAL for electrons at energies 10, 20, 50, 99, 196 and 282 GeV and at 0◦ incidence angle for no LPM effect (left), the new LPM effect (centre) and the old LPM effect (right).

Figure 6.26. The ratio between data and simulation for the number of clusters in different layers of the TKR and for the mean energy deposited in the 8 layers of the CAL for electrons at energies 10, 20, 50, 99, 196 and 282 GeV and at 0◦ incidence angle for no additional material (left), 0.1X0 of additional lead (centre) and 0.2X0 of additional lead (right).

When looking at the position reconstruction in the individual CAL crystals, the observed 68% containments of the position error in one of the CAL crystals were 4.3 ± 0.2 mm for 5 GeV electrons and 13.1 ± 0.4 mm for muons, which is well below the design requirement of 30 mm. As explained before, the large difference between electrons and muons can be explained by the difference in incoming angle. Since the
muons are not bundled in a pencil beam, like the electrons, they deposit energy in
a larger segment of the crystals. This makes position determination more difficult
and therefore the distribution is more spread out.
The same tendency, seen for position reconstruction in the CAL, holds for direction reconstruction in the CAL. For the 68% containment of the distribution of
space angles between the TKR direction and the CAL direction, the values for data
are again larger by 13.6–22.1%. In this case, the difference is largest for electrons,
which is also evident from the corresponding figures.
When looking at the individual direction variables in both the TKR and the CAL, not shown in this thesis, it can be seen that most of the runs exhibit a large bias in the CAL direction variables compared to the beam direction and compared to the simulation, whereas the TKR direction variables approximately coincide. For the runs used in the energy resolution studies, the runs from the SPS manifest a bias that is often, but not always, negative in the x direction and positive in the y direction. This could point to a misalignment of the CAL with respect to the TKR, but since other runs, such as the 5 GeV electron run from the PS, demonstrated unbiased direction distributions, a misalignment is unlikely. The individual direction variables for the CAL also showed that the distributions for data are in general broader. Both these effects contribute to the large difference between data and simulation for the position and direction reconstructions, but further and more detailed studies are needed in order to determine what is causing them.
For the dark matter line search, the direction is not a crucial ingredient, except when the known sources of gamma-rays are masked. Even in that case, there are few high-energy photons coming from these sources, and the inclusion of a fraction of them would probably not influence the search a great deal. One way to compensate for the observed differences between data and simulation is, however, to make a conservative choice for the radius of the circle masking the sources.
The figures for the layer-wise energy deposition in the CAL show that the mean
value for data is larger than for the simulation by less than 10% in all layers for
photons. For electrons the difference is largest in the 1st layer where the value for
data is almost 16% greater than for the simulation. The trend for all particle types
is that the agreement is better the higher the layer number is until a turning point
occurs in the 5th layer for tagged photons and in the 6th layer for bremsstrahlung
photons and electrons. For these runs, the minimum difference in the mean value
is less than 1%.
The figures showing the energy resolutions have similar trends for data and simulation for both the raw energy and two of the three energy reconstruction algorithms. Only CalLkHdEnergy demonstrates large differences. This method is in fact the least maintained of the three algorithms. Furthermore, it only extends to 300 GeV and divides the energy range into bins, which may give rise to bin-edge effects. These factors may explain the strange behaviour at higher energies. Overall, however, with the exception of CalLkHdEnergy,
the differences between data and simulation in terms of the energy resolution are
relatively small and stay below 10% over most of the tested energy range.
Generally speaking, the energy resolution should be worse at higher energies due to the increasing leakage. As seen in the figure showing the difference in peak position between data and simulation, the peak position is consistently about 10% higher for data than for the simulation. This may partially explain the differences in energy resolution between data and simulation.
In absolute terms, the energy resolutions in both data and simulation, in particular those from the corrected energies, are consistent with the Fermi-LAT science requirements, which state that the energy resolution for on-axis photons must be <10% for 100 MeV–10 GeV and <20% for 10–300 GeV. The reader is reminded that a plot of the energy dependence of the Fermi-LAT energy resolution, determined from simulations, was shown in Fig. 4.12 in Chapter 4.
It should be noted that an energy-dependent bias with respect to the true energy
can be seen in the energy distributions for the simulation. The most probable
energies provided by the parametric method and the likelihood method indicate a
bias, which seems to increase with energy, whereas the peak of the profile method
coincides relatively well with the peak of the true energy at all energies.
Shortly before the analysis presented in this thesis was finished, an error in the
Geant4 code was discovered by the beam test working group. The error was an
incorrect implementation of the so-called Landau-Pomeranchuk-Migdal (LPM) effect, which governs the energy-dependent suppression of bremsstrahlung in charged
particle interactions. The impact of this error should be more significant for higher
energies.
While waiting for a better implementation by the Geant4 team, the LPM effect was turned off in the simulation and the analysis presented in this thesis was
performed on that version of the simulation. The most recent simulations made
within the beam test working group, which include the correct implementation of
the LPM effect and extra material in the form of lead in front of the TKR, have
shown that the large differences in the first layers of the CAL decrease, thereby
flattening the curve for the difference between data and simulation. Some of the
differences thereby reduce to a constant factor. The exact nature of this extra
material that might exist in the beam line or the detector but is missing in the
simulation is, however, still unclear.
To conclude, beam line effects that do not apply to the Fermi -LAT cannot
be ruled out as the source of the differences seen between distributions for data
and simulation, especially since later studies show an increasing agreement with
extra material in the simulations. The differences are, however, likely caused by a
combination of geometrical effects, physics and calibration.
The calibration of the Fermi -LAT is not directly dependent on the beam test
results, since the Fermi -LAT is calibrated on-orbit. Also, the calibration procedure
of e.g. the CAL differs between the beam test and the satellite. As described in
the thesis, muons were used to calibrate the satellite on ground and cosmic ray
protons are used on-orbit. The differences seen in the energy studies may therefore
not apply for the Fermi -LAT. The CU and Fermi -LAT differ in many aspects,
but the observed differences in the CU introduce a systematic uncertainty for the
Fermi -LAT that should be noted in science analyses.
Apart from the absolute shift in the peak position, the energy reconstruction
is relatively well reproduced by simulations. This is important for the modelling
of spectral lines that is used in the dark matter line search, since full detector
simulations are used to parametrise the line shape.
In the end, further efforts are needed in order to rule out beam line effects. The
beam test campaign has, however, already been very useful for the understanding
of the detector and with the collected data many instrumental effects and software
errors have been uncovered that would otherwise have been difficult to detect.
Chapter 7
Dark matter line search
This chapter focuses on the search for spectral lines from dark matter (DM) in
Fermi -LAT data. It contains a discussion on where a signal can be looked for and
a description of some of the statistical methods that can be used in the search
and their statistical properties. Two statistical methods that are implemented for
spectral line searches, profile likelihood and Scan Statistics, are then reviewed. A
binned profile likelihood method and Scan Statistics are then applied to a simulated
data set called obssim2. Finally, an unbinned profile likelihood method is applied
to almost one year of Fermi -LAT data.
7.1 Initial discussions

7.1.1 Region-of-interest selection
The gamma-ray sky has never been measured at the high end of the Fermi -LAT
energy range. This means that the gamma-ray emissions, which serve as a background to a spectral line from DM annihilations or decays, are largely unknown.
Nevertheless, the optimal region in space that gives the largest signal with respect to
the background depends on the distribution of gamma-rays from these background
sources. It also depends on the distribution of DM in the Universe. Therefore,
the optimal region with respect to one halo profile is not the optimal region for a
different halo profile. Finally, the figure-of-merit (e.g. the signal-to-noise ratio or
the value of the likelihood) for which the optimisation is performed also affects the
result. These factors make the selection of a suitable region-of-interest non-trivial.
For more discussions on the subject, see e.g. [134] and [135].
For the analysis applied to simulated data (presented in Section 7.5), the selected region-of-interest is the one proposed by Stoehr et al. [134]. In their studies, a DM
distribution with inherent substructure for a “Milky Way-like” galaxy was produced
using N-body simulations, given a flat ΛCDM Universe with assumed parameters.
Within the Milky Way, the dominating background will come from the galactic
diffuse emission. Motivated by the results from the N-body simulations and accounting for an extragalactic background, an angular window given by a broken
annulus around the galactic centre with a radius from 25◦ to 35◦ but excluding the
region within 10◦ from the galactic plane was used in the paper to determine 3σ
detection limits with Fermi. In this region, the total galactic diffuse emission was
assumed to be zero. How much this assumption affects the result is unclear. The
broken annulus region is shown in black and in galactic coordinates in Fig. 7.1.
Figure 7.1. A visualisation of the two regions-of-interest used in the analysis in
galactic coordinates. The black region is the broken annulus used for simulated data,
and the sum of the gray and the black region defines the region used for measured
data.
For the analysis on measured data from the Fermi -LAT instrument (presented
in Section 7.6), a different region-of-interest was chosen for several reasons. First
of all, the assumptions motivating the broken annulus were found to be unrealistic.
Secondly, with the given observation time, a larger region would give better photon
statistics, which in turn would constrain the photon background better.
The chosen region for Fermi -LAT data excludes the galactic plane at |b| < 10◦
while keeping a 20◦ × 20◦ square around the galactic centre. This selection ensures
that a large portion of the dominant background from the galactic diffuse emission is
masked while including the galactic centre, which is a potentially interesting region
for dark matter searches. Thanks to the large remaining region, the background
can be well constrained, which was one of the goals.
In the remaining region, photons coming from a circular region with radius 0.2◦
around the 1233 preliminary point sources found after 11 months of Fermi -LAT
observations were also removed. The motivation for masking these sources is that
a large majority of them have known counterparts in the form of e.g. pulsars and
active galaxies, where significant amounts of dark matter annihilations or decays
are not expected. Due to the large number of point sources in the galactic centre
region, the removal of the point sources there would have removed most of the
region. Therefore, no point sources were removed within a radius of 1◦ from the
galactic centre. The source cut removes about 5% of the total number of photons
and about 0.4% of the solid angle. The selected region before source removal is
illustrated as the sum of the black and gray regions and in galactic coordinates in
Fig. 7.1.
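The combined geometric cut described above can be sketched as follows. This is an illustrative reconstruction, not code from the thesis, and the point-source masking and the 1◦ exemption near the galactic centre are omitted:

```python
import numpy as np

def in_region(l_deg, b_deg):
    # Hypothetical sketch of the region-of-interest cut for measured data:
    # mask the galactic plane |b| < 10 deg, but keep the 20 x 20 deg
    # square centred on the galactic centre (|l| <= 10, |b| <= 10).
    l = np.asarray(l_deg, dtype=float)
    b = np.asarray(b_deg, dtype=float)
    in_square = (np.abs(l) <= 10.0) & (np.abs(b) <= 10.0)
    off_plane = np.abs(b) >= 10.0
    return off_plane | in_square
```

The function is vectorised, so photon coordinate arrays can be masked in one call.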
7.1.2 Halo profile selection
As described in Section 3.4, the distribution of DM in galaxies can be modelled
with different halo models. For the analysis applied to the simulated data, the
NFW profile was used. The galactocentric distance of the Sun was assumed to be
8.5 kpc and the local DM density was set to 0.3 GeV cm−3 .
For the analysis on measured Fermi -LAT data, a variety of halo profiles were
used: the NFW profile with rs = 20 kpc, the Einasto profile with rs = 20 kpc and
a = 0.17, and a shallow isothermal profile with rs = 5 kpc. The local DM density
was set to 0.4 GeV cm−3 and the maximum values of r were set to ∼ 150 kpc for
the NFW and Einasto profiles and ∼ 100 kpc for the isothermal profile.
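As an illustrative sketch (not thesis code), the NFW and Einasto shapes with the local-density normalisation used for the measured-data analysis could be written as follows; the parameter values are the ones quoted above:

```python
import numpy as np

R_SUN = 8.5       # kpc, assumed galactocentric distance of the Sun
RHO_LOCAL = 0.4   # GeV cm^-3, local DM density used for the Fermi-LAT analysis

def nfw_shape(r, rs=20.0):
    # NFW shape: rho ~ 1 / ((r/rs) * (1 + r/rs)^2)
    x = r / rs
    return 1.0 / (x * (1.0 + x) ** 2)

def einasto_shape(r, rs=20.0, a=0.17):
    # Einasto shape: rho ~ exp(-(2/a) * ((r/rs)^a - 1))
    return np.exp(-2.0 / a * ((r / rs) ** a - 1.0))

def rho(r, shape=nfw_shape, **kw):
    # Normalise the chosen shape so that rho(R_SUN) = RHO_LOCAL.
    return RHO_LOCAL * shape(r, **kw) / shape(R_SUN, **kw)
```

The normalisation step is what ties the dimensionless profile shape to the assumed local density.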
7.1.3 Data selection
The event class used for the analysis on Fermi -LAT data is not the standard “diffuse” class (see also Section 4.2.6) but rather an extension of it. It is a subset of
the event class developed for the Fermi -LAT measurements of the isotropic diffuse
gamma-ray emission [23] and has internally been referred to as the “extradiffuse”
class. The extradiffuse class has two cuts on the Merit-level in addition to the standard diffuse class cuts: a) an average charge deposited in the tracker planes that is
less than a specified value in order to veto heavy ions, b) a transverse shower size
in the calorimeter within a specified range expected for electromagnetic showers to
veto against hadronic showers and minimum-ionising particles.
The additional cuts are designed to reduce the charged-particle background, which
due to the nature of the background rejection may leak in only at specific energies,
thereby causing spectral features that mimic spectral lines. The extradiffuse class
leads to some loss of the effective area but yields a gamma-ray efficiency of > 90%
as compared to the standard diffuse class. In Fig. 7.2, the effective areas of the
diffuse and extradiffuse classes are shown for events classified as “back” (defined in
Section 4.2.1).
As mentioned in Section 4.2.4, one of the energy correction algorithms is the
profile fitting method. In the analysis of the measured Fermi -LAT data, the energy
provided by the profile method was chosen instead of the standard energy, which
is a composition of the three different energy correction methods. The reason is
mainly due to the fact that the likelihood method (included in the standard energy)
only works up to 300 GeV, at which point there is a sharp cut off as could be seen
in Fig. 6.20.
Figure 7.2. A comparison between diffuse and extradiffuse class events in terms of
the effective area for back events.
In addition, since the likelihood method is trained on specific sets of Monte
Carlo simulations and the training is divided into regions in energy, the overall
energy spectrum also suffers from a subtle binning effect, which gives rise to small
step-like spectral features. This behaviour can be seen in Fig. 7.3, where the energy
spectra from the profile method and the likelihood method are compared. In the
figure, steps can be seen at 30, 50, 70 and 100 GeV. The statistical errors have been
omitted for presentational purposes.
Figure 7.3. A comparison between the energy spectra from the profile method and
the likelihood method. A step-like behaviour can be seen in the spectrum from the
likelihood method, which is the result of a binning effect. Statistical errors have
been omitted for presentational purposes.
Furthermore, as could be seen in the bottom plot of e.g. Fig. 6.19, the peaks
of the parametric and likelihood methods are both biased with respect to the true
value of the energy, whereas the peak from the profile method seems to be almost
unbiased.
Since the purpose of the spectral line search is to detect small localised spectral
features, the profile method was deemed to be a safer choice. In the simulated
obssim2 data set, however, the profile method could not be chosen because the
default energy in the simulation files could not be changed. Therefore, the standard
energy was used.
The Fermi -LAT data used in the analysis spans over more than 11 months,
ranging from 7 August, 2008, to 21 July, 2009, and was collected in sky survey mode.
The photons that are included have energies between 19.4 GeV and 298.4 GeV.
Additional standard data quality cuts are also performed in order to reduce the
effect of the Earth albedo background. Thus, only events coming from angles
< 105◦ with respect to zenith are accepted. Furthermore, only time intervals when
the angle between the zenith direction and the spacecraft z-axis was < 47◦ are accepted.
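A minimal sketch of the per-photon part of these selections is shown below. The helper is hypothetical and uses the energy and zenith bounds quoted above; the 47◦ cut applies per time interval, not per event, and is therefore omitted:

```python
def passes_cuts(energy_gev, zenith_deg):
    # Sketch of the photon-level cuts quoted in the text: energies between
    # 19.4 and 298.4 GeV and a zenith angle below 105 deg.
    return 19.4 <= energy_gev <= 298.4 and zenith_deg < 105.0
```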
7.2 Statistical concepts
The most basic concept in statistical theory is probability and the mathematical theory behind it was first developed by Andrey Nikolaevich Kolmogorov in
1933 [136, 137]. According to the theory, probability must satisfy the three Kolmogorov axioms. If Ω is defined to be the set of all elementary and exclusive
(the occurrence of one excludes the occurrence of the others) events Xi , then the
probability of Xi occurring, P (Xi ), satisfies the axioms [138]:
1. P(Xi) ≥ 0 for all i

2. P(Xi or Xj) = P(Xi) + P(Xj)

3. ΣΩ P(Xi) = 1
7.2.1 Frequentist and Bayesian statistics
There are two schools of thought in statistical theory, frequentist (or classical) and
Bayesian statistics, which both satisfy the Kolmogorov axioms but which differ on
basic principles. In frequentist theory, probability is defined in terms of the relative
frequency of something happening.
In the Bayesian approach, on the other hand, probability can be interpreted
as the degree-of-belief of something happening. The name comes from the use
of Bayes' theorem, given in Eq. 7.1, which links the posterior probability P (θi |x̄)
(the probability of the hypothesis θi given the observed data x̄) and the likelihood
function L (x̄|θi ).
P(θi|x̄) = L(x̄|θi) · P(θi) / P(x̄)   (7.1)
Here, P (θi ) is the prior probability and P (x̄) can be considered to be a normalisation constant, since the sum of the left-hand side over all hypotheses must
be unity.
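For a discrete set of hypotheses, Eq. 7.1 reduces to a one-line normalisation; the numbers below are purely illustrative:

```python
import numpy as np

# Discrete form of Bayes' theorem: posterior ∝ likelihood × prior, with
# P(x) acting as the normalisation so the posteriors sum to one.
prior = np.array([0.5, 0.5])        # P(theta_i) for two example hypotheses
likelihood = np.array([0.9, 0.3])   # L(x | theta_i), illustrative values
posterior = likelihood * prior / np.sum(likelihood * prior)
```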
7.2.2 Confidence intervals
The concept of confidence intervals (CIs) is the construction and estimation of an
interval for some parameter, which contains the true parameter value with some
probability 1 − α. The probability 1 − α is also referred to as the confidence level
(CL).
When a CI is used in the context of some scientific experiment or measurement,
it can be considered to be the error on the parameter. The use of confidence
intervals to report the statistical error of a measurement was first developed by
Jerzy Neyman [139]. To illustrate the concept, a parameter µ and the observed
value of it, x, can be assumed. If the probability density function, P (x|µ), for each
fixed value of µ is assumed to be known, a horizontal acceptance region [x1 , x2 ]
can be drawn for each value of µ such that P (x ∈ [x1 , x2 ]) = 1 − α. The resulting
confidence belt is shown in Fig. 7.4 [140].
When a measurement of x is performed, a vertical line is drawn at the value of
the measurement. The intersections between the line and the confidence belt then
gives the CI [µ1 , µ2 ], which is the union of all values of µ for which the acceptance
region is intersected by the line.
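The construction can be sketched for the simple case of a Gaussian measurement with known σ; `neyman_interval` is a hypothetical helper using central 90% acceptance regions (z = 1.645), scanned over a grid of µ values:

```python
def neyman_interval(x_obs, sigma=1.0, z=1.645):
    # Neyman-construction sketch for a Gaussian measurement x ~ N(mu, sigma):
    # for each mu on a grid, the central acceptance region is
    # [mu - z*sigma, mu + z*sigma]; the confidence interval is the union of
    # all mu whose acceptance region contains x_obs.
    mu_grid = [i * 0.01 for i in range(-1000, 1001)]
    accepted = [mu for mu in mu_grid
                if mu - z * sigma <= x_obs <= mu + z * sigma]
    return min(accepted), max(accepted)
```

For the Gaussian case this reproduces the familiar x_obs ± zσ interval, up to the grid spacing.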
In this thesis, CIs are also used for hypothesis testing, as will be shown in
Section 7.4.
7.2.3 Hypothesis tests
In a statistical test, the goal is often to see whether an observation agrees with a
given hypothesis or not. The hypothesis under consideration is generally referred
to as the null hypothesis and is usually denoted by H0 . Making statements about
the validity of H0 often concerns a comparison with some alternative hypotheses,
usually denoted as H1 , H2 , ..., and the discrimination between the two is often
accomplished by using a so-called test statistic (TS).
Commonly, a hypothesis test is formulated in terms of a decision to accept or
reject H0 . This can be done e.g. by defining a critical region in the probability
density function P (t|H0 ) (where t is the test statistic), such that the probability to
observe a value of t greater than tcrit is α. If the observation, tobs , is in the critical
region, H0 is rejected. The complement of the critical region is generally called the
acceptance region.
Figure 7.4. An illustration of a generic confidence belt from which the confidence
interval for a parameter µ can be calculated (from [140]).
7.2.4 Coverage
Coverage is a concept defined for CIs. It means that a fraction (1 − α) of an infinite
set of CIs obtained from an infinite number of identical experiments should contain
the true value of the parameter to be estimated. In other words:
P(s ∈ [s1, s2]) = 1 − α   (7.2)
where s1 and s2 are the lower and upper limit of the CI for the parameter s. A
method with this property satisfied is said to have nominal coverage. If instead,
P (s ∈ [s1 , s2 ]) < 1 − α, the intervals “undercover” for that s. Significant undercoverage for any s is a serious flaw [140]. For P (s ∈ [s1 , s2 ]) > 1 − α, the intervals
are said to “overcover” for that s. The intervals that overcover for some s and
undercover for no s are “conservative”. Overcoverage is not as serious a problem
as undercoverage, but it leads to a loss of power, as described below. In the context of
hypothesis testing, α is called the type-I error and corresponds to the probability
that the null hypothesis is rejected even though it is true.
7.2.5 Power
Power is a concept defined for hypothesis tests. The power of a test is the probability
that the null hypothesis is rejected when the alternative hypothesis is true. In
other words, power = 1 − β, where β is the probability of a type-II error, i.e. the
probability to accept the null hypothesis when the alternative hypothesis is true.
When using CIs for hypothesis testing, power is the fraction of cases where s = 0
is not contained in the interval given that s > 0 is true.
7.2.6 Significance
The significance of a given observed signal (usually connected to some alternative
hypothesis) is commonly quoted in physics and can be defined as the probability to
obtain, under the null hypothesis, a value at least as extreme as the observed one.
Despite the fact that the likelihood function is not necessarily a normal distribution, the quoted probability is often related to the standard deviation of a
normal distribution. The standard deviation of a normal distribution corresponds
to a specific cumulative probability, which depends on whether the integration is one-sided
or two-sided.
If the integration is two-sided, one standard deviation (1σ) corresponds to
∼68.3%, 2σ corresponds to ∼95.4% and 5σ corresponds to ∼99.99994%. The number of standard deviations n, representing the probability p from a two-sided integration of a normal distribution is given by Eq. 7.3, where erf−1 is the inverse of
the error function.
n = erf⁻¹(p) · √2,   (7.3)

7.3 Statistical methods
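Since Python's standard library provides erf but not its inverse, the two-sided conversion in Eq. 7.3 above can be inverted numerically, e.g. by bisection; `n_sigma` is a hypothetical helper, not thesis code:

```python
import math

def n_sigma(p):
    # Invert p = erf(n / sqrt(2)) by bisection, giving the number of
    # standard deviations n for a two-sided probability p (Eq. 7.3).
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erf(mid / math.sqrt(2.0)) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```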
There are a large number of statistical methods that can be used to search for
spectral lines. For this thesis, a subset of these have been studied in more detail.
In the methods described in this section, nobs corresponds to an observed number
of events, s is a signal parameter and b is a background parameter.
7.3.1 Bayes factor method
In Bayesian theory, a TS can be defined by taking the ratio of two posterior probability distributions, in this case called Bayes factors, one for a null hypothesis
and one for an alternative hypothesis. These are given by Eq. 7.4 and Eq. 7.5,
respectively,

Bfact,H0 = ∫ L(nobs|b) P(b) db,   (7.4)

Bfact,H1 = ∫∫ L(nobs|s, b) P(s) P(b) ds db,   (7.5)
if the priors P (s) and P (b) are specified. The priors can e.g. be uniform distributions or Gaussian distributions centred on the most likely values of the true
parameters.
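For a single Poisson count with uniform priors on s and b, Eqs. 7.4 and 7.5 can be evaluated by simple grid integration. The function below is an illustrative assumption, not from the thesis, and the prior bounds are arbitrary:

```python
import math

def pois(n, mu):
    # Poisson probability mass function.
    return math.exp(-mu) * mu ** n / math.factorial(n)

def bayes_factors(n_obs, b_max=20.0, s_max=20.0, steps=200):
    # Numerical evaluation of Eqs. 7.4 and 7.5 for a single Poisson count
    # with uniform priors P(b) = 1/b_max and P(s) = 1/s_max on [0, max].
    db = b_max / steps
    ds = s_max / steps
    bs = [(i + 0.5) * db for i in range(steps)]
    ss = [(i + 0.5) * ds for i in range(steps)]
    bf_h0 = sum(pois(n_obs, b) / b_max for b in bs) * db
    bf_h1 = sum(pois(n_obs, s + b) / (s_max * b_max)
                for s in ss for b in bs) * ds * db
    return bf_h0, bf_h1
```

The ratio of the two factors can then serve as the TS mentioned in the text.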
7.3.2 χ2 method
For the comparison in Section 7.3.5, a non-standard χ2 method was used. In this
method, the TS is given by:
TSχ² = (nobs − nnull)² / (√nnull)² = (nobs − nnull)² / nnull,   (7.6)
where nnull is the expected number of events when the null hypothesis is true. The
TS is distributed according to a χ2 distribution and can be used to calculate the
coverage and power, respectively, by requiring that the TS in each experiment is
greater than the quantile of a χ2 distribution that corresponds to the confidence
level.
In a standard multi-bin case, TSχ2 is a sum over all the bins. It can be used
to assess the goodness-of-fit if a model is fitted to the set of bins. The TS is then
distributed according to a χ2 distribution with N − m degrees of freedom, where
N is the number of bins and m is the number of free parameters in the model.
The expectation value of a random variable distributed according to the χ2
distribution is equal to the number of degrees of freedom (NDF) and often the
result of a goodness-of-fit is presented as χ2 /NDF. A value close to one therefore
indicates a good fit, whereas a significantly larger value than one points to a bad
fit that suggests that the proposed model is wrong. A value that is much less than
one may instead mean that the fit is too good, given the size of the measurement
errors. This can therefore signify that the errors have been overestimated or that
they are correlated.
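The multi-bin TS can be written in a few lines; `chi2_ts` is a hypothetical helper following Eq. 7.6:

```python
def chi2_ts(n_obs, n_null):
    # Multi-bin version of Eq. 7.6: sum over bins of
    # (n_obs - n_null)^2 / n_null. With a fitted model, the result is
    # compared to a chi2 distribution with N - m degrees of freedom.
    return sum((o - e) ** 2 / e for o, e in zip(n_obs, n_null))
```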
7.3.3 Feldman & Cousins
A frequentist technique for calculating CIs that has become popular in recent years is the one suggested by Feldman & Cousins [140]. The method is based on the construction of an acceptance region for each possible hypothesis (in the way proposed by Neyman [139]) and fixing the limits of the region by including experimental outcomes according to a rank given by the likelihood ratio in Eq. 7.7,
λFC = L(nobs|s, b) / L(nobs|ŝ, b),   (7.7)
where ŝ is the signal parameter most compatible with nobs . In this method, it is
assumed that the background (also called a nuisance parameter) is perfectly known.
The usage of the likelihood ratio is motivated by the Neyman-Pearson lemma, which
states that the acceptance region giving the highest power (or the highest signal
purity) is given by the likelihood ratio.
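The construction can be sketched for a single Poisson count with known background; `fc_acceptance` is an illustrative implementation of the likelihood-ratio ordering of Eq. 7.7, not code used in the thesis:

```python
import math

def pois(n, mu):
    # Poisson probability mass function.
    return math.exp(-mu) * mu ** n / math.factorial(n)

def fc_acceptance(s, b, alpha=0.1, n_max=50):
    # Feldman & Cousins acceptance region for n ~ Pois(s + b) with known
    # background b: rank outcomes n by the likelihood ratio of Eq. 7.7,
    # with s_hat = max(0, n - b), and accept them in decreasing rank until
    # the summed probability reaches 1 - alpha.
    ranked = sorted(
        ((pois(n, s + b) / pois(n, max(0.0, n - b) + b), n)
         for n in range(n_max)),
        reverse=True)
    accepted, p = [], 0.0
    for _, n in ranked:
        accepted.append(n)
        p += pois(n, s + b)
        if p >= 1.0 - alpha:
            break
    return sorted(accepted)
```

Repeating this over a grid of s values and inverting, as in the Neyman construction, yields the Feldman & Cousins confidence belt.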
7.3.4 Profile likelihood
A standard result in statistical theory is that −2 ln λ, for a likelihood ratio λ, approximately follows a χ2 distribution with k degrees of freedom (in this case k = 1). An uncertainty in
the background estimate can be treated by maximising the ln-likelihood over the
background estimate with fixed s, in which case the likelihood function (“profile
likelihood”) can be expressed in terms of the signal estimate only [141]. A profile
likelihood ratio can then be formulated as in Eq. 7.8,
λPL = L(nobs|s, b̂(s)) / L(nobs|ŝ, b̂),   (7.8)
where ŝ and b̂ are the values of the signal and background parameters that maximise
the likelihood.
The so-called Rao-Cramér-Fréchet inequality gives a lower bound on the variance of an estimator. If the second derivative of the likelihood in that inequality is
estimated with the measured data and the maximum likelihood estimates, then it
can be shown after the expansion of the ln-likelihood function in a Taylor series [142]
that:
−ln L(θ̂ ± i · σθ̂) = −ln Lmax + i²/2,   (7.9)
where θ̂ is the estimate of a parameter θ, σθ̂ is the standard deviation of that
estimate, Lmax is the value of the likelihood function at its maximum (which turns
into a minimum when taking the negative logarithm of the likelihood function) and
i is the number of standard deviations.
This means that the i standard deviation error on the maximum likelihood
estimate of a parameter can be determined by stepping up on the − ln L curve
until the value of − ln L has increased by an amount given by 0.5 · i2 and by finding
the corresponding values on the parameter in those locations.
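This step-up procedure can be sketched numerically; `one_sigma_interval` is a hypothetical helper that scans a user-supplied −ln L curve away from its minimum:

```python
def one_sigma_interval(nll, theta_hat, step=1e-3):
    # Walk away from the minimum of -lnL in both directions until the curve
    # has risen by 0.5 * i^2 with i = 1 (Eq. 7.9); the crossing points give
    # the 1-sigma confidence interval on theta.
    target = nll(theta_hat) + 0.5
    lo = theta_hat
    while nll(lo) < target:
        lo -= step
    hi = theta_hat
    while nll(hi) < target:
        hi += step
    return lo, hi

# Gaussian example: -lnL = (theta - 5)^2 / (2 * 2^2) up to a constant,
# so the interval should come out close to [3, 7].
```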
7.3.5 Method comparison
The methods above can be compared with a toy model, where S ∼ Pois(s + b)
and B ∼ Pois(τ b). The capital letters S and B denote random variables and the
lower case letters s and b correspond to the signal and background parameters
respectively. If the background estimate is taken from a sideband measurement, τ
is the ratio between the size of the background region and the size of the signal
region.
The two random processes above yield two hypotheses, H0 and H1 , with Poisson
likelihood functions given by Eq. 7.10 and Eq. 7.11,
H0 : L(nS, nB|b) = [b^{nS} e^{−b} / nS!] · [b^{nB} e^{−b} / nB!],   (7.10)

H1 : L(nS, nB|s, b) = [(s + b)^{nS} e^{−(s+b)} / nS!] · [b^{nB} e^{−b} / nB!],   (7.11)
where nS and nB are realisations, or observed values, of the random variables S
and B, respectively.
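Toy realisations of this model can be drawn with a simple Poisson sampler; the sketch below (Knuth's multiplication method) is illustrative and assumes small means:

```python
import math
import random

def pois_draw(mu, rng):
    # Knuth's multiplication method for Poisson sampling (fine for small mu).
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def toy_experiment(s, b, tau=1.0, rng=None):
    # One toy realisation of the model: S ~ Pois(s + b), B ~ Pois(tau * b).
    rng = rng or random.Random()
    return pois_draw(s + b, rng), pois_draw(tau * b, rng)
```

Generating many such toys and applying each method to them is exactly the kind of benchmark used for the coverage and power comparison below.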
The most basic approach is to divide the energy spectrum of the data into
a signal region (where signal and background are supposed to be present) and a
background region (where only background is supposed to be present) from which
the contribution of the background to the signal region is estimated [143]. These two
regions correspond to S and B, respectively. In this case, the signal and background
regions are of equal size, which means that τ = 1.
The question of the presence of a signal (detection) and the calculation of CIs are in
general different topics in mathematical statistics (see e.g. [144]). The Bayes factor
method and the frequentist χ2 method described above represent hypothesis tests
whereas the frequentist methods, Feldman & Cousins and profile likelihood, are CI
calculation methods. As demonstrated by ProFinder (described in Section 7.4), however, CIs can also be used to claim a detection by requiring that s = 0 is not included in the calculated CI.
The two figures-of-merit, coverage and power, were benchmarked for the methods above, using a set of 1024 toy Monte Carlo experiments (i.e. sets of random
realisations from fixed assumed models). Fig. 7.5 shows a comparison of the coverage for each method as a function of the signal parameter. For the χ2 method and
the Bayes factor method, the null hypothesis was s = s0 . This also means that for
the χ2 method, nnull = s + b ≈ s + nB in Eq. 7.6 in Section 7.3.2.
Figure 7.5. The coverage (1 − α), at 99% confidence level and with b = 8, for the
four investigated methods as a function of the signal parameter s0 .
The power of each method as a function of the signal parameter is shown in
Fig. 7.6. For the χ2 method and Bayes factor method, the null hypothesis was
s = 0. This corresponds, for the χ2 method, to nnull = b ≈ nB in Eq. 7.6 in
Section 7.3.2.
Figure 7.6. The power (1 − β), at 95% confidence level and with b = 8, of the four
investigated methods as a function of the signal parameter s.
As can be seen from Fig. 7.5, the profile likelihood method is the only method
with roughly nominal coverage, even if it comes at the cost of a lower power as seen
in Fig. 7.6, while the other methods show undercoverage. This motivates a further
implementation of the profile likelihood method into a spectral line search for DM.
7.4 Implementations for line search
Two methods were implemented to search for spectral lines in an energy spectrum:
the profile likelihood method, which due to the specific application will hereafter be
referred to as ProFinder (Profile likelihood peak Finder), and an additional method
called Scan Statistics. ProFinder presents a novel way of using profile likelihood
CIs as a means of finding signal peaks in a background distribution. Two different
approaches based on the profile likelihood principle are presented: a binned and
an unbinned approach. The unbinned ProFinder is, however, more advanced and
utilises more information in the data in order to increase the sensitivity.
7.4.1 Binned ProFinder
For the binned case, the TRolke class in the ROOT software package [145], with
which the profile likelihood CI can be calculated for a single bin, has been used.
In the approach described, the signal is implicitly assumed to be present only in a
single bin, an assumption which is strictly not true in this case due to the energy
dispersion in the detector.
First, the spectrum in the interesting energy range is divided into a certain
number of bins of equal width and then the profile likelihood CI is calculated in
each bin. The background estimate needed for the calculation is obtained by fitting
the spectrum with a background model. If there is a narrow spectral line signal in
the spectrum, the fitting should not be significantly affected, unless the line is very
strong. A signal detection at a chosen confidence level occurs when the lower limit
in any of the calculated CIs for the spectrum is greater than zero.
If a potential spectral line signal is located at the boundary between two bins, the signal
will be divided between the two bins and the significance of the signal will decrease.
To avoid this, two bin sets can be used, which have a relative shift by one half of
the bin width. For optimal performance, the bin width should be matched to the
energy resolution of the detector. If the chosen width is too large, the sensitivity for
the signal will be lower than in the optimal case, since more background is included
in the bin.
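The scan logic can be caricatured as follows. Note that the Gaussian lower limit used here is only a stand-in for the TRolke profile-likelihood interval actually used in the analysis, so this is an illustration of the detection rule, not of the interval calculation:

```python
import math

def binned_scan(counts, bkg_expect, z=5.0):
    # Simplified sketch of the binned ProFinder idea: for each bin, form a
    # crude z-sigma lower limit on the signal, n - b - z*sqrt(b), and flag
    # bins whose lower limit is above zero as detections.
    return [i for i, (n, b) in enumerate(zip(counts, bkg_expect))
            if n - b - z * math.sqrt(b) > 0]
```

In practice the scan is run twice, with the second bin set shifted by half a bin width, as described above.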
In Fig. 7.7, the detection principle of the binned ProFinder is demonstrated
using a simulated spectrum, which includes a background given by a power-law function and a spectral line at 260 GeV, as measured by some detector.
For presentational purposes, the number of events in the simulated signal is large
compared to the observed background in the vicinity of the signal. The figure shows
the limits, as calculated by the binned ProFinder, for a confidence level of 5σ in
each bin. As can be seen in the figure, the lower limit at 260 GeV is > 0, which
signifies the detection of a signal.
Figure 7.7. A demonstration of the detection principle for the binned ProFinder.
The dashed red line connects the upper limits and the solid blue line connects the
lower limits in each bin. The limits are calculated from profile likelihood confidence
intervals at 5σ confidence level. The lower limit is > 0 at 260 GeV, which signifies
a detection.
7.4.2 Unbinned ProFinder
A model that is well suited for a spectral line search, if the energy dispersion of
the detector and the background distribution can be reasonably well modelled, is
an unbinned profile likelihood model where the likelihood is assumed to be the sum
of the contributions from the signal and the background. The type of likelihood
function can be either composite or extended. The composite model likelihood
function is given by Eq. 7.12,
L(Ē|f, Γ) = ∏_{i=0}^{n_tot} [f · S(Ei) + (1 − f) · B(Ei, Γ)],   (7.12)
where ntot is the total number of photons in the sample, f and Γ are free parameters
and correspond to the signal fraction and some parameter in the background model
respectively, S (Ei ) is the model of the signal as measured by the detector, B (Ei , Γ)
is the model of the background and Ei is the energy of the i-th photon.
In the extended approach, the likelihood function is slightly different and shown
in Eq. 7.13.
L(Ē|ns, nb, Γ) = ∏_{i=0}^{n_tot} [ns · S(Ei) + nb · B(Ei, Γ)]   (7.13)
Here, f is replaced by the two free parameters ns and nb . This also means that
in the composite approach, the total number of events is fixed to what is in the
sample, whereas in the extended approach, the total number of events is fitted.
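The composite form of Eq. 7.12 translates directly into a negative log-likelihood; `composite_nll` is a hypothetical helper in which S and B are assumed to be normalised probability density functions:

```python
import math

def composite_nll(energies, f, S, B):
    # Negative log of the composite likelihood of Eq. 7.12: a sum over
    # photons of -ln[f*S(E) + (1-f)*B(E)], where f is the signal fraction
    # and S, B are normalised pdfs evaluated at each photon energy.
    return -sum(math.log(f * S(E) + (1.0 - f) * B(E)) for E in energies)
```

Minimising this over f (and any background parameters) gives the maximum likelihood estimate of the signal fraction.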
An unbinned maximum likelihood approach can be constructed by using the
RooFit framework [146] implemented in ROOT. The framework allows complex probability distributions to be built by successively adding individual
model components. The likelihood is maximised using MINOS in MINUIT [147].
For the composite model case described above, Eq. 7.9 can be visualised according to Fig. 7.8. In the same way as for the binned case, detection and upper limits
are dealt with according to the value of the lower limit of the confidence interval.
A lower limit on the signal fraction that is > 0 signifies a detection at the specified
confidence level, whereas a lower limit of zero gives an upper limit. In the curves
shown in the figure, the background parameters for each signal fraction are such
that the likelihood is maximised.
Signal model
A crucial component of the unbinned ProFinder is the signal model, S (Ei ). The
signal model in this case refers to how photons of identical energy would be distributed in energy when measured by the Fermi -LAT detector. This distribution
can be determined with simulations as long as the simulations can be trusted to
represent data reasonably well.
Figure 7.8. The error on the maximum likelihood estimate of the signal fraction f
can be determined by stepping up on the likelihood curve. The resulting confidence
interval can be used both for detection (right) and for setting an upper limit (left).
As could be seen in Fig. 6.23, the relative difference in energy resolution between
data and simulation for the profile fitting method is small except at the highest
tested energy, which has limited statistics. Spectral line shapes based purely on
beam test data would, however, not be correct, not only because just a selection of the total phase space was tested in the beam tests, but also because of the differences in the detector geometry and calibration procedure between the CU and the Fermi-LAT.
Since the observed differences between data and simulation, in terms of the
energy reconstruction, are relatively small, the signal model for the spectral line
search on Fermi-LAT data is based on full detector simulations, using the Geant4-based GlastRelease-v17r7. The simulations are very time-consuming, so the spectral lines have been simulated only at the energies 20, 50, 100, 150 and 300 GeV. At intermediate energies, the signal model is constructed via interpolation. The energy dispersion of the spectral lines in the detector simulations is asymmetric and difficult to parametrise in full detail, even with very high statistics. For a relatively large number of events,
however, an approximation consisting of the sum of three Gaussian functions can
be used.
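Such a sum of three Gaussians is straightforward to evaluate; a small sketch with hypothetical parameters (the actual weights, means and widths are fitted separately for each line energy):

```python
import math

def triple_gauss(x, pars):
    """Sum of three normalised Gaussians; pars is a list of
    (weight, mean, sigma) tuples with weights summing to one."""
    total = 0.0
    for w, mu, sigma in pars:
        norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
        total += w * norm * math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return total

# Hypothetical parameters for a line: a narrow core, a wider component
# and a broad low-energy tail, in GeV relative to the true line energy.
pars = [(0.6, 0.0, 3.0), (0.3, -2.0, 8.0), (0.1, -15.0, 20.0)]
density = triple_gauss(-1.0, pars)  # evaluated at E_rec - E_true = -1 GeV
```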
The energy dispersions for the energies mentioned above are shown in Figs. 7.9–7.13. All energy dispersions in the range 20–300 GeV can be seen in Fig. 7.14. The red lines correspond to the fitted energy dispersions and the gray lines represent the interpolated energy dispersions.
Sliding window
Choosing a window in energy, in which data is accepted and in which a signal of
a specific mass is searched for, can be a trade-off between how well constrained
one wishes the background to be and how well modeled one wishes the background
to be. If the background distribution is well understood, a larger window is to
be preferred to better constrain the background. However, if the background is
106
Chapter 7. Dark matter line search
[Histogram of CalCfpEnergy − McEnergy (GeV): 48269 entries, χ2/ndf = 234/190.]
Figure 7.9. The energy dispersion at 20 GeV, fitted with the sum of three Gaussian
functions.
[Histogram of CalCfpEnergy − McEnergy (GeV): 24063 entries, χ2/ndf = 171/189.]
Figure 7.10. The energy dispersion at 50 GeV, fitted with the sum of three Gaussian functions.
[Histogram of CalCfpEnergy − McEnergy (GeV): 11677 entries, χ2/ndf = 113.6/90.]
Figure 7.11. The energy dispersion at 100 GeV, fitted with the sum of three
Gaussian functions.
[Histogram of CalCfpEnergy − McEnergy (GeV): 10666 entries, χ2/ndf = 90.25/87.]
Figure 7.12. The energy dispersion at 150 GeV, fitted with the sum of three
Gaussian functions.
[Histogram of CalCfpEnergy − McEnergy (GeV): 9804 entries, χ2/ndf = 108.3/91.]
Figure 7.13. The energy dispersion at 300 GeV, fitted with the sum of three
Gaussian functions.
Figure 7.14. The energy dispersion as a function of the energy. Red lines represent fitted energy dispersions and gray lines are interpolations. The ordinate is in
arbitrary units.
variable or slightly differing from the assumed distribution, a smaller window is to be
preferred since the background estimate is then more localised to the surroundings
of a potential signal.
For these reasons, a sliding window in energy, where the size of the window
changes with energy, was used. The window size is defined by the energy resolution
and set to extend four times the mean value of the three standard deviations of
the Gaussian functions (describing the signal shape) in each direction around the
spectral line energy. The resulting extent in energy of all windows can be seen in
Fig. 7.15.
Figure 7.15. The intervals in energy covered by the individual windows in the
sliding window.
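The window construction described above can be sketched as follows, where the three standard deviations are hypothetical values standing in for the fitted Gaussian widths:

```python
def window(e_line, sigmas, n=4.0):
    """Energy window around a candidate line: extend n times the mean of
    the three Gaussian standard deviations in each direction."""
    half_width = n * sum(sigmas) / len(sigmas)
    return e_line - half_width, e_line + half_width

lo, hi = window(100.0, [3.0, 8.0, 20.0])  # hypothetical sigmas in GeV
```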
Statistical properties
The power and coverage of the unbinned ProFinder cannot be tested with the simple
toy model in Section 7.3.5. The statistical performance is therefore investigated
with more realistic signal and background models.
The power and coverage for the unbinned ProFinder have been investigated for
three different cases. In the first case, the composite model likelihood is used
and the signal is searched for exactly in the place where it is simulated. The peak
energy has been assumed to be 100 GeV and the signal fraction is constrained to be
positive. In the second case, the simulated signal is offset by -5 GeV and therefore
located in between two of the energies which were tested for the measured Fermi-LAT data (as presented in Section 7.6). This allows for an inspection of the risk of
missing a signal even though it is there, just because the search is not conducted
at that particular energy. The third case uses the extended formalism instead, and
allows the number of signal events to also take negative values.
110
Chapter 7. Dark matter line search
The signal models used for the simulation and for the fitting are identical. Furthermore, the background model is defined from a background-only fit with a
power-law function to the corresponding region in Fermi -LAT data. The background model in the coverage and power studies is also a power-law function, but
the index of the power-law function is a free parameter in the fit, whereas in the
simulation it is fixed.
The coverage for the three cases above for 10000 toy Monte Carlo experiments
at each signal fraction is shown in Fig. 7.16 and a close-up of the region at low
signal fractions is given in Fig. 7.17.
Figure 7.16. The coverage (1 − α) at 95% confidence level for the unbinned
ProFinder at 100 GeV as a function of the signal fraction for three cases: composite model with the signal fraction constrained to be positive, the first case but
with the simulated line offset by -5 GeV, and the extended model with the number of signal events allowed to take on negative values as well. The dashed line marks the
location of nominal coverage.
As can be seen in the figures, the composite case has slight overcoverage at low
signal fractions. This is due to the constraint on the signal fraction as indicated by
the nominal coverage seen in the extended case, where the conditions are otherwise
identical. The overcoverage is also reasonable since there are two ways in which
the true signal parameter can be outside the interval: if it is lower than the lower
limit and if it is higher than the upper limit. In the constrained case, the latter
possibility is blocked, which should lead to overcoverage.
It is also clear that the coverage for the composite offset case decreases with
increasing signal fraction. This makes sense, because at low signal fractions the
assumed offset location of the line does not affect the fit that much and the signal
events mimic to a higher degree a statistical fluctuation of the background. The
Figure 7.17. A close-up of Fig. 7.16.
higher the signal fraction, the more significant the impact of the offset location of
the line is on the fit.
The power in the same three cases described above for 10000 experiments at
each signal fraction is shown in Fig. 7.18. As can be seen in the figure, the power of
the composite case is virtually identical to the power of the extended case, whereas
the power of the composite offset case is lower. This is also expected, since the fit
will be worse when the signal is not at the assumed location. This leads to lower
limits and consequently to a lower power.
A statistical property that can be taken into account is the loss of coverage
when the search for a signal is conducted at multiple locations (corresponding to
multiple trials). This means that if the reality is that there is no signal, it is more
likely that a signal is found anyway (in the form of fluctuations of the background)
if 15 different peak energies are considered than if only a single peak energy is
considered. In other words, by looking at many different peak energies, the false
detection rate is increased.
A trial factor correction that in ideal cases would give the nominal coverage
with the price of worse limits can be performed with a binomial correction. The
p-value corresponding to the (1 − p) confidence level is then deduced from P(K = 0) = (1 − p)^n, where P(K = 0) is the desired confidence level and n is the number of
trials. In Fig. 7.19 a demonstration of the behaviour of lost coverage, for 95% CL
and 18 trials, is shown. As can be seen in the figure, p =5% corresponds to 22%
actual coverage and not 95%. The trial factor corrected p is the one that actually
gives 95% coverage. For this example, this occurs at p ≈ 0.3%.
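The corrected p can be obtained in closed form by inverting the relation above; a minimal sketch reproducing the numbers quoted in the text:

```python
def corrected_p(confidence, n_trials):
    """Per-trial p-value that preserves the desired overall confidence
    level, from P(K = 0) = (1 - p)^n solved for p."""
    return 1.0 - confidence ** (1.0 / n_trials)

p = corrected_p(0.95, 18)  # ~0.00285, i.e. the ~0.3% quoted above
```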
The limitations of the binomial correction are that it is only true by construction for uncorrelated trials, which is not strictly the case here, since the search
Figure 7.18. The power (1 − β) at 95% confidence level for the unbinned ProFinder
at 100 GeV as a function of the signal fraction for three cases: composite model with
the signal fraction constrained to be positive, the first case but with the simulated
line offset by -5 GeV, and the extended model with the number of signal events allowed to take on negative values as well.
Figure 7.19. The probability of zero successes in n = 18 trials as a function of the
probability of success in one trial.
regions (defined by the sliding window) are overlapping, and that overcoverage in
the method will remain even after the binomial correction. It is therefore necessary
to instead investigate the actual coverage for a number of different p-values in order
to find the p-value that gives nominal coverage. This is, unfortunately, very time
consuming and makes the interpretation of the limit at each mass unnecessarily
complicated. Therefore, a trial factor correction has not been applied to the limits
shown in this thesis. It is, however, important to keep in mind that such an effect
exists.
7.4.3 Scan Statistics
Scan Statistics (SS) is a statistical method that can be used to detect a bump
or excess in a uniform spectrum. The method is claimed to work as a powerful
and unbiased alternative to the traditionally used techniques involving the χ2 and Kolmogorov–Smirnov (KS) distributions. SS has better power than both χ2 and KS, as can be seen
in Fig. 7.20, where s = 20 and b = 100 [148]. This motivates further studies to
apply SS to Fermi -LAT data.
Figure 7.20. The power of Scan Statistics, χ2 and Kolmogorov-Smirnov as a
function of the peak position for s = 20 and b = 100 (from [148]).
In SS, the TS is given by the largest number of events found in any subinterval
of [A, B] of length w(x). The variable bin width, w(x), is used in the case of a
non-uniform spectrum under the null hypothesis, which is the case for the analysis
presented in this thesis, but reduces to a constant, w, if the spectrum under the
null hypothesis is uniform. In mathematical form, the TS is given by Eq. 7.14,
TS_SS(w(x)) = max_{A ≤ x ≤ B−w} {Y_x(w(x))} ,   (7.14)
where Yx is the number of events in the x-th bin.
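For a uniform null spectrum (constant w), the test statistic can be computed directly from the event list; a minimal sketch with made-up event energies:

```python
def scan_statistic(energies, w, a, b):
    """TS for Scan Statistics with a constant window width w: the largest
    number of events in any subinterval of [a, b] of length w. It is
    sufficient to consider windows starting at the event positions."""
    xs = sorted(e for e in energies if a <= e <= b)
    best = 0
    for i, x in enumerate(xs):
        best = max(best, sum(1 for y in xs[i:] if y <= x + w))
    return best

ts = scan_statistic([1.0, 2.0, 3.0, 10.0, 10.4, 10.8, 20.0], w=1.0, a=0.0, b=25.0)
```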
SS differs significantly from standard methods in signal searching that are based
on likelihood ratios. In a likelihood ratio approach (see also Section 7.3.3), the data
set is often fitted with two different models, one with background only (null hypothesis) and the other with background plus signal (alternative hypothesis). The
TS is then compared to a null distribution (often assumed to be a χ2 distribution),
which yields the significance of the signal. In SS, the null distribution must be
created ad hoc for each non-uniform background model. The performance of SS,
however, depends on the uncertainty in the background model and the accuracy of
the variable binning.
Before applying SS to a data set, its performance in terms of the power should
be tested. This can be done with toy Monte Carlo experiments. For the test,
an energy range from 50 GeV to 350 GeV was chosen, since this is the theoretically
more interesting region for spectral lines from DM. The energy range was divided
into 15 bins of variable width that have the same expected number of events under
the null hypothesis, which is a power-law function with an index of ∼2.5.
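Bin edges with equal expected counts under a power law follow from inverting the cumulative integral; a sketch for the binning just described:

```python
def powerlaw_bins(e_min, e_max, n_bins, index):
    """Bin edges giving equal expected counts in each bin under a
    power-law spectrum dN/dE ~ E^-index (index != 1)."""
    g = 1.0 - index
    lo, hi = e_min ** g, e_max ** g
    return [(lo + (hi - lo) * k / n_bins) ** (1.0 / g) for k in range(n_bins + 1)]

edges = powerlaw_bins(50.0, 350.0, 15, 2.5)  # 15 variable-width bins
```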
For the given background model, a null distribution was first created. A total of
10^7 random realisations from the power-law model were generated and the TS given by SS was extracted in each experiment. In each experiment, 1518 events, with a standard deviation of √1518, were generated. An example experiment, from which
the highest number of events was extracted, and the resulting null distribution can
be seen in Fig. 7.21.
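On a much smaller scale than the 10^7 experiments used here, the construction of the null distribution can be sketched as follows. Since the variable binning gives equal expectations in every bin, each toy experiment can simply distribute events uniformly over the bins (with a fixed total rather than a Poisson-fluctuated one, for brevity):

```python
import random

def toy_ts(n_events, n_bins, rng):
    """One toy experiment under the null hypothesis: distribute events
    uniformly over the equal-expectation bins and return the TS, i.e.
    the largest single-bin count."""
    counts = [0] * n_bins
    for _ in range(n_events):
        counts[rng.randrange(n_bins)] += 1
    return max(counts)

rng = random.Random(42)
null = sorted(toy_ts(1518, 15, rng) for _ in range(1000))
q99 = null[int(0.99 * len(null))]  # 99% quantile, used as a threshold
```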
To study the power and to see if the power is constant over the energy range, two
sets of toy Monte Carlo experiments were generated with 1000 experiments in each
set. The chosen number of experiments in each set gives a reasonable statistical
accuracy. The same background model was used in both sets. In
each experiment, a signal was generated in addition to the background. In the first
set, the signal was placed in the first bin, whereas in the second set it was placed
in the last bin.
The signal strength was set to give an average reference significance of 4σ,
calculated as the ratio between the number of signal events, ns , and the square root of the number of background events, √nb . For each experiment, the TS was
extracted and the power at 99% CL was then given by the probability of finding
the correct bin multiplied by the probability that the significance of the correct bin
exceeded the 99% quantile of the null distribution.
The results from the test can be seen in Table 7.1 and in Fig. 7.22. As can be
seen in the table, the significance from SS is lower than the reference significance at
both ends of the energy range. This is explained by the fact that for SS the signal
is searched for in several bins and not just one, as is assumed when calculating
the reference significance. The resulting trial factor reduces the significance of the
detection and is taken into account in SS via the production of the null distribution.
Figure 7.21. An example experiment (top) and the null distribution from repeated
experiments (bottom) for Scan Statistics.
Table 7.1. The results from testing the bin dependence of the performance of Scan Statistics (SS) with 1000 experiments. From left to right, the columns correspond to the bin where the signal was included, in how many experiments SS found the correct bin, the average of the calculated significances from SS and the corresponding average of the calculated reference significances from ns /√nb .

Bin   n_correct bin   ⟨SS_sign⟩   ⟨Ref_sign⟩
1     950             ≈96.9%      ≈99.9%
15    953             ≈97.1%      ≈99.9%
As can be seen in Fig. 7.22, the power and the number of times the correct bin was found increase with the signal parameter, as expected.
Figure 7.22. The power, (1 − β), at 99% confidence level (top) and the number
of experiments in which the correct signal bin was found out of 1000 experiments
(bottom) as a function of the signal parameter s. The background model was a
power-law with b = 1518 and 15 bins with variable widths, matching the power-law,
were used.
7.5 Application on obssim2 data set
The binned ProFinder and Scan Statistics were developed before the launch of the
Fermi satellite. Therefore, these were applied to a simulated data set. The results
from this study are shown in this section. The data set that was used is called
obssim2 but is sometimes referred to as “Service Challenge 2”. The simulation
is based on a parametrisation of the Fermi -LAT instrument response functions
that are implemented in the Fermi -LAT observation simulator gtobssim, developed by the Fermi -LAT Collaboration. The simulator is part of a larger software
package called Science Tools [149], which is partially based on the NASA tool
FTOOLS [150].
The obssim2 data set corresponds to one year of normal data taking with 54
occasions of 5 hour pointed observations and the remaining time in sky survey mode.
It should be noted that obssim2 was simulated using an older version of the IRFs
(defined in Section 4.2.7), namely Pass4, and the IRFs have since been improved.
The effective area and energy resolution for Pass4V2 are shown in Fig. 7.23. In
addition, a charged particle background is not included in obssim2 and the data
quality cuts are not the same as in Fermi-LAT. The obssim2 data set is therefore in many senses not realistic; its purpose was mainly to enable the development and testing of science analyses.
[Plot of effective area (cm^2) vs. energy (MeV) for the handoff thin (best PSF), thick and combined (thick+thin) sections.]
Figure 7.23. The energy dependence of the effective area and the energy resolution
for Pass4V2 instrument response functions for the Fermi-LAT (from [151]).
A counts map, with all the photons in the vicinity of the galactic centre, can be
seen in Fig. 7.24. A long list of gamma-ray sources is simulated in obssim2 and
four different DM components were included:
• Galactic centre: continuum and 2 lines within 1◦ radius.
• Halo: continuum and 2 lines from 1◦ radius from the galactic centre and
extending to the full sky.
• Extragalactic: continuum and 1 line isotropically distributed over the sky.
• Satellites: continuum only.
Additional information about the different DM components in obssim2 data is
presented in Table 7.2.
As a search region, the broken annulus around the galactic centre, described and
discussed in Section 7.1.1, was selected. This included the sky at a radius from 25◦
Table 7.2. The different dark matter components included in the obssim2 data set. Fluxes are given in units of 10−5 m−2 s−1 .

Source name           L         B         Flux^a    Flux^b     Flux^c     DM model
Lcc2 GC cont          0         0         508       451.02     179.81     LCC2^1
Lcc2 GC gg            0         0         0.188
Lcc2 GC gz            0         0         0.503
Lcc2 halo cont                            8800      7812.90    3114.81    LCC2
Lcc2 halo gg                              3.26
Lcc2 halo gz                              8.71
Generic extrag cont                       4170      3032.73    796.09     GM 1^2
Generic extrag gg                         3.67
Lcc2 clump45          176.00    -76.87    5.32      1.88                  LCC2
Lcc2 clump10          -108.48   69.86     5.49      1.94                  LCC2
Generic clump0        26.29     -39.26    9.74      4.93                  GM 2^3

a: >10 MeV. b: >100 MeV. c: >1 GeV.
1 LCC2 model (from [152]): WIMP mass = 107.9 GeV, ⟨σv⟩ = 1.64 × 10−26 cm3 s−1 , branching fraction for the γγ line 3.7 × 10−4 and for the γZ line 9.9 × 10−4 .
2 Generic model 1: WIMP mass = 100 GeV, ⟨σv⟩ = 3 × 10−26 cm3 s−1 , branching fraction for the γγ line 10−3 .
3 Generic model 2: WIMP mass = 100 GeV, ⟨σv⟩ = 2.3 × 10−26 cm3 s−1 .
Figure 7.24. A counts map (in galactic coordinates) of the area around the galactic
centre in the obssim2 data set.
to 35◦ , but excluded the region within 10◦ from the galactic plane. The assumption
that the diffuse emission is zero in this region is, however, not true for obssim2.
The final result of any line search can have two obvious outcomes. Either there
is a line signal above a chosen threshold significance or there is not. The way in
which the result is generally presented differs between the two cases. In the former case,
the 68% CI is often calculated around the maximum likelihood estimate and in the
latter case, the 90%, 95% or 99% upper limit is often given.
For this analysis, two of the methods presented in Section 7.4 were chosen,
namely the binned ProFinder and Scan Statistics. In a sense, the two methods
are complementary. The binned ProFinder can find several peaks simultaneously
whereas SS can determine an exact significance (limited by the number of events
in the null distribution). Finding an exact significance with the binned ProFinder
is certainly possible, but for a given detected signal it requires either that the CIs are calculated at multiple confidence levels with a small step-size, in order to find the confidence level at which the lower limit is no longer zero, or that the likelihood function as a function of the number of signal events can be drawn or accessed, in order to calculate the step-up (see also Section 7.3.4) that has been made. Currently, neither of the two has been implemented.
In its current state, SS cannot be used for upper limit calculations and therefore
only the results from a peak search are presented. Fig. 7.25 shows the resulting
spectrum when using a variable binning based on a power-law fit of the obssim2
data in the broken annulus. The largest number of events was found in bin 14, but
Figure 7.25. The resulting histogram from the obssim2 data set, when using bins
of variable size based on a power-law fit of the energy spectrum. The largest number
of events is found in bin 14 but only corresponds to ∼1σ when comparing to the null
distribution.
with a significance from the null distribution of only about 1σ.
For the binned ProFinder, the energy spectrum from 30 GeV to 350 GeV for
the broken annulus was divided into 16 bins. A second bin set, with a relative
shift of 10 GeV compared to the first bin set, was also defined. When using binned
ProFinder on the obssim2 data set, there was no 5σ detection. The 5σ CI limits on
the number of signal events can be seen in Fig. 7.26. The conversion into an upper
limit on the flux, using the exposure of the broken annulus region, is described in
the next section.
Neither SS nor the binned ProFinder is able to detect a spectral line signal
from DM in the obssim2 data set. This, however, does not necessarily mean that
the methods themselves are bad. It should be noted that none of the DM models
included in obssim2 are within the sensitivity of the Fermi -LAT [153].
7.5.1 Exposure
The exposure can be defined as the product of the effective area and the integrated
live time for a given direction in space that is within the acceptance of the detector.
Consequently, the exposure is measured in units of cm2 s. Furthermore, as was
explained in Section 4.2.7, the effective area depends on the energy and incident
angle.
The exposure over the sky is almost but not completely uniform due to the
movement of the Fermi satellite with respect to the sky. These movements include
the orbital inclination of the satellite with respect to the Earth’s equator and the
rotational inclination of the Earth with respect to the solar system plane.
Figure 7.26. The binned profile likelihood upper limits at 5σ confidence level on
the number of gamma-rays from final states producing spectral lines as a function
of the spectral line energy, calculated from obssim2 data. The lower limits are zero
at all tested peak energies.
In analyses of the data sets, the non-uniform and energy-dependent exposure
must be taken into account. In this case, the simplest approach is to calculate the
average exposure in the region-of-interest. For the obssim2 data set, the exposure
can be calculated with Science Tools. In Fig. 7.27, the exposure of the sky for a
specific energy range is shown. The broken annulus is marked with a dotted line.
Figure 7.27. The full sky exposure for the obssim2 data in units of cm2 s and in
galactic coordinates. The dotted line corresponds to the broken annulus chosen for
the analysis.
The energy dependence of the exposure is shown in Fig. 7.28. The average value
of the exposures shown in the figure is about 2 × 1010 cm2 s.
Figure 7.28. The energy dependence of the exposure for the broken annulus in
obssim2 data.
7.5.2 Limits
The gamma-ray line flux Φ(E) in units of cm−2 s−1 sr−1 for a specified energy E can be calculated from the number of signal events ns (E), if the exposure ε(E) and solid angle Ω of the region-of-interest are known.
The solid angle is defined as the fractional area with respect to the surface area
of sphere that a specified region-of-interest has when projected onto the surface of
the sphere and when viewed from the centre of the sphere. The solid angle of the
whole sphere, corresponding to the whole sky, is 4π steradian (sr).
The relation between the variables listed above is given by Eq. 7.15.

Φ(E) = ns (E) / ( ε(E) [cm2 s] · Ω [sr] )   (7.15)
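For illustration, Eq. 7.15 can be evaluated with round numbers: the solid angle of a full 25°–35° annulus (the broken annulus additionally excludes the galactic plane, so its true solid angle is somewhat smaller) and the average exposure of about 2 × 10^10 cm2 s quoted above. The 100 signal events are an arbitrary example value:

```python
import math

def line_flux(n_s, exposure, omega):
    """Eq. 7.15: flux in cm^-2 s^-1 sr^-1 from the number of signal
    events, the exposure in cm^2 s and the solid angle in sr."""
    return n_s / (exposure * omega)

# Solid angle of a full annulus between polar angles of 25 and 35 degrees.
omega = 2.0 * math.pi * (math.cos(math.radians(25.0)) - math.cos(math.radians(35.0)))

flux = line_flux(100.0, 2.0e10, omega)  # ~9.1e-9 cm^-2 s^-1 sr^-1
```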
The equation also holds for upper limits and has therefore been used to calculate
the upper limits on the flux from the upper limits on the number of signal events
provided by the binned ProFinder. The resulting upper limits on the flux at 95%
CL are shown in Fig. 7.29. It should be noted that the trial factor pertaining to a
search in multiple bins has not been taken into account in the calculations of the
upper limit.
The relation between the flux from dark matter annihilations into monochromatic gamma-rays and the annihilation cross-section is given by Eq. 3.6 in Section 3.3. From this equation, the velocity-averaged cross-section can be deduced. Using the upper limit on the flux, the upper limit on the velocity-averaged cross-section, ⟨σv⟩γγ , can be plotted as a function of the dark matter particle mass, Mχ .
Figure 7.29. The binned profile likelihood upper limits at 95% confidence level on
the flux of gamma-rays from final states producing spectral lines as a function of the
spectral line energy, calculated from obssim2 data. The lower limits are zero at all
tested peak energies.
The line-of-sight (LoS) integral, or J(ψ) in Eq. 3.7, is calculated with DarkSUSY, a publicly available advanced numerical package for DM calculations [89]. As mentioned before, an NFW halo profile is assumed for this analysis. For a given halo model, the LoS integral is a function of the angle, ψ, which represents the direction of observation with respect to the galactic centre.
The average of all LoS integrals in the broken annulus is calculated by binning
the full annulus, defined by the inner and outer radius alone, into bins of equal
area and by defining measuring points in the centre of gravity of each bin. A visual
representation of this grid can be seen in Fig. 7.30.
The final result can be seen in Fig. 7.31. The figure shows the 95% upper limit
on the velocity-averaged cross-section for the χχ → γγ process, i.e. hσviγγ , as a
function of the dark matter particle mass (here assumed to be a WIMP), MW IM P .
The limits on the cross-section can improve significantly if the DM density is
increased via DM substructures or steeper DM halo profiles. This is illustrated in
Fig. 7.32, where the upper limits on hσviγγ obtained with the binned ProFinder for
obssim2 have been boosted by a factor of 103 and overlaid on the allowed regions
in SUSY parameter space for two specific SUSY models: MSSM, which was mentioned in Section 3.2, and minimal supergravity (mSUGRA), which is a constrained
version of the MSSM. Experimental bounds from accelerators and measurements of
the cosmic microwave background by the Wilkinson Microwave Anisotropy Probe
have been taken into account in the figure, but not results from direct detection
experiments. The dashed line shows the boosted upper limits, which would exclude the models above the dashed line had they been calculated from an actual
measurement.
Figure 7.30. A visual representation of the grid containing line-of-sight integrals
for the broken annulus region in galactic coordinates.
Figure 7.31. The profile likelihood upper limits at 95% confidence level on the
velocity-averaged cross-section for the process χχ → γγ as a function of the WIMP
mass, calculated from obssim2 data.
7.6 Application on Fermi-LAT data
The results from the DM line search presented in this section have been published
in Physical Review Letters [119]. The overall procedure is almost identical to the
analysis on the simulated obssim2 data set. A peak finding method derives limits
on the number of photons coming from DM annihilations or decays and these are
translated into limits on the flux using the exposure and solid angle of the selected
region.
Figure 7.32. The profile likelihood upper limits at 95% confidence level from
Fig. 7.31, boosted by a factor of 103 (dashed line). The shaded areas represent the
allowed parameter space for two specific SUSY models, MSSM and mSUGRA.
The analysis on simulated data was mainly used as a test of the procedure
itself and therefore limits were only calculated on the cross-section for a γγ final
state. For Fermi-LAT data, limits are also calculated for the γZ final state, and on both cross-sections and decay lifetimes. In addition, the data selection and analysis method differ, as does the procedure for calculating limits on the cross-sections. Also, as described in Section 7.1.1, the region-of-interest in this analysis
covers a larger portion of the sky, while keeping the theoretically interesting galactic
centre region.
The spatial distribution in galactic coordinates of all photons in the specified
region-of-interest with energies 20–300 GeV in almost one year of Fermi -LAT data
is shown in Fig. 7.33.
As can be seen in the figure, there is elevated activity in the galactic centre
region, as expected due to the strong galactic diffuse emission and the large number
of strong gamma-ray sources there.
The unbinned ProFinder, described in Section 7.4.2, is then used to calculate
limits on the number of photons from DM annihilations and decays into final states
that produce spectral lines. SS is no longer considered for the analysis on Fermi LAT data, mainly due to the time-consuming creation of null distributions and
because multiple lines from different final states may exist in the data, whereas SS is only able to find the largest one.
In Fig. 7.34, a binned representation of the unbinned fit with the unbinned
Figure 7.33. The spatial distribution in galactic coordinates of photons with energies 20–300 GeV in almost one year of Fermi-LAT data.
ProFinder at 40 GeV is shown. The fit is also the most significant in the analysed
energy range.
A concern one might have is whether the fit is a good fit to the data or not.
The likelihood cannot be used to assess the goodness-of-fit per se. However, one
way to assess it is to construct a binned representation of the unbinned fit result
(as in Fig 7.34 above) and study the residuals as well as the χ2 /NDF value. In
Table 7.3, a summary of some of the quantities related to the fit is shown. It
includes the number of events in the sliding window, the χ2 /NDF values for the
binned representations of the unbinned fits using 20 bins, the maximum likelihood
estimate and its lower and upper limit for each spectral line energy.
As can be seen in the table, there are no major deviations from the optimal value of 1 in the χ²/NDF column. Many of the values are, however, lower than 1, which may indicate that the errors are overestimated. A visual inspection of the residuals can still serve as a sanity check, in which large overall deviations from a uniform residual distribution are sought. No such behaviour was observed at any of the tested peak energies. In Fig. 7.35, an example of a residual plot derived from Fig. 7.34 is shown. The residuals fluctuate around zero, which together with the χ²/NDF value of 0.68 indicates a good fit.
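As an illustration, this goodness-of-fit check can be sketched in a few lines of Python. The counts and the flat model below are synthetic placeholders, not the analysis data:

```python
import numpy as np

def binned_gof(counts, model):
    """Chi-squared per degree of freedom and fractional residuals
    (data - model) / model for a binned representation of a fit,
    using Poisson errors sqrt(model) on the bin contents."""
    counts = np.asarray(counts, dtype=float)
    model = np.asarray(model, dtype=float)
    resid = (counts - model) / model
    chi2 = np.sum((counts - model) ** 2 / model)
    ndf = len(counts) - 1  # illustrative; subtract the number of fitted parameters
    return chi2 / ndf, resid

# Synthetic example: counts fluctuating around a flat model of 100 per bin.
counts = [104, 96, 101, 99, 95, 107, 98, 100]
chi2_ndf, resid = binned_gof(counts, [100.0] * 8)
```

The χ²/NDF value and the residuals computed this way, per sliding window, are the quantities reported in Table 7.3 and Fig. 7.35.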
The resulting limits at 95% CL on the number of signal events as a function of the peak energy, calculated by multiplying the limits on the signal fraction by the number of photons in the specified energy window, are shown in Fig. 7.36. All lower limits were zero, so only the upper limits are shown. This also means that there was no detection, even at 95% CL.
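The conversion from signal-fraction limits to limits on signal counts is a simple rescaling; a sketch using a few rows of Table 7.3 (energies in GeV):

```python
# Convert 95% CL signal-fraction limits into limits on signal counts by
# multiplying with the number of photons in each sliding energy window,
# as done for Fig. 7.36. The values below are rows of Table 7.3.
events_in_window = {40: 4466, 100: 1251, 200: 669}      # photons per window
f_upper_limit = {40: 0.0347, 100: 0.0370, 200: 0.0378}  # 95% CL on f

n_s_upper_limit = {e: f_upper_limit[e] * events_in_window[e]
                   for e in events_in_window}
```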
Table 7.3. The fit results from the unbinned ProFinder applied to Fermi-LAT data. The number of events in the window, the χ²/NDF for a binned representation of the unbinned fit using 20 bins, and, for the signal fraction f, the maximum likelihood estimate and its lower and upper limits are shown for each spectral line energy.

Energy (GeV)  Events  χ²/NDF  f_MLE            f_LL  f_UL
30            6514    1.01    0.0020           0     0.0184
40            4466    0.68    0.0140           0     0.0347
50            3304    0.95    0.0005           0     0.0243
60            2625    0.76    0.0126           0     0.0396
70            2121    0.81    6.4429 × 10⁻¹⁰   0     0.0189
80            1732    1.19    2.6249 × 10⁻⁸    0     0.0166
90            1466    0.82    0.0216           0     0.0585
100           1251    1.13    7.1475 × 10⁻⁶    0     0.0370
110           1212    0.93    1.1598 × 10⁻⁸    0     0.0229
120           1181    1.16    1.7205 × 10⁻⁸    0     0.0303
130           1125    1.05    0.0125           0     0.0506
140           1059    0.93    0.0144           0     0.0536
150           983     0.93    0.0110           0     0.0503
160           915     1.12    2.4399 × 10⁻⁶    0     0.0367
170           857     1.35    8.8512 × 10⁻⁹    0     0.0220
180           792     0.92    1.9848 × 10⁻¹⁰   0     0.0327
190           718     0.74    1.4184 × 10⁻⁸    0     0.0383
200           669     1.08    1.0842 × 10⁻⁶    0     0.0378
Figure 7.34. A binned representation of the unbinned maximum likelihood fit
(top) and the resulting likelihood function (bottom) at an assumed line position of
40 GeV. In the bottom plot, the two cases where the nuisance parameter has a fixed
value at the maximum-likelihood estimate (blue line) and values that maximise the
likelihood at each signal fraction (red line) are shown.
7.6.1 Exposure
The procedure for calculating the exposure is identical to what is described in
Section 7.5.1 for the simulated data set. For the region-of-interest specified for the
search, the exposure in the energy range 100–110 GeV is shown in Fig. 7.37.
The energy dependence of the exposure is given in Fig. 7.38. The average exposure is about 3 × 10¹⁰ cm² s.
Figure 7.35. Residuals constructed from the binned representation in Fig. 7.34.
Figure 7.36. The unbinned profile likelihood upper limits at 95% confidence level
on the number of gamma-rays from final states producing spectral lines as a function
of the spectral line energy, calculated from Fermi-LAT data. The lower limits are
zero at all tested peak energies.
7.6.2 Limits
Figure 7.37. Exposure map in galactic coordinates and in the energy range 100–110 GeV for the selected region-of-interest for Fermi-LAT data.

Figure 7.38. The energy dependence of the exposure in the selected region-of-interest for Fermi-LAT data.

When the galactic centre region is included in the search, the calculation of the average line-of-sight integral, as was done for obssim2 data, is not precise enough unless a large number of bins is used. The reason is the steep nature of the halo profiles close to the galactic centre. Since a larger number of bins would also increase the computational time to an unreasonable level, a different approach was chosen for the analysis on Fermi-LAT data. In the alternative approach, a numerical integration of Eq. 3.5 in Section 3.3 is performed in MATLAB for the three chosen halo profiles.
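The thesis performs this integration in MATLAB; a minimal Python sketch of the same kind of line-of-sight integral, assuming an NFW profile with common benchmark parameter values (not necessarily those used in the analysis), could look as follows:

```python
import numpy as np

# A sketch of the numerical line-of-sight integration of Eq. 3.5,
# performed in MATLAB in the thesis. The NFW parameters below are
# common benchmark values, not necessarily those of the analysis.
R_SUN = 8.5   # kpc, distance from the Sun to the galactic centre
R_S = 20.0    # kpc, NFW scale radius
RHO_S = 0.26  # GeV cm^-3, NFW scale density

def rho_nfw(r):
    """NFW density profile rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]."""
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def j_los(psi, l_max=100.0, n=20001):
    """Integral of rho^2 along the line of sight at angle psi (radians)
    from the galactic centre, by trapezoidal quadrature. Multiply by
    3.086e21 cm/kpc to convert the result to GeV^2 cm^-5."""
    l = np.linspace(1e-6, l_max, n)  # kpc, distance along the line of sight
    r = np.sqrt(R_SUN ** 2 + l ** 2 - 2.0 * R_SUN * l * np.cos(psi))
    f = rho_nfw(r) ** 2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(l)))

# The profile is steep towards the centre, which is why a coarse binning
# of the averaged integral is not precise enough there:
j_inner, j_outer = j_los(0.1), j_los(1.0)
```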
The resulting upper limits on the flux at 95% CL, calculated using Eq. 7.15, can be seen in Fig. 7.39. These have been calculated in the same way as the corresponding limits from the simulated data, i.e. by division with the exposure (shown in Fig. 7.38) and the solid angle, which is about 10.5 sr for the selected region-of-interest.
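This conversion from a counts limit to a flux limit is again a simple rescaling; a sketch using the approximate numbers quoted in the text:

```python
# Convert the limit on signal counts into a flux limit by dividing with
# the exposure and the solid angle of the region-of-interest. The two
# constants are the approximate values quoted in the text.
EXPOSURE = 3e10     # cm^2 s, average exposure (Fig. 7.38)
SOLID_ANGLE = 10.5  # sr, selected region-of-interest

def flux_limit(n_s_ul):
    """95% CL upper limit on the flux in cm^-2 s^-1 sr^-1."""
    return n_s_ul / (EXPOSURE * SOLID_ANGLE)

# A counts limit of order 150 gives a flux limit of a few times
# 10^-10 cm^-2 s^-1 sr^-1, in the range spanned by Fig. 7.39.
```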
Figure 7.39. The unbinned profile likelihood upper limits at 95% confidence level on the flux of gamma-rays from final states producing spectral lines as a function of the spectral line energy, calculated from Fermi-LAT data. The lower limits are zero at all tested peak energies.

The final upper limits on the velocity-averaged cross-section, ⟨σv⟩γX, and the final lower limits on the decay lifetime, τγX, for the two cases X = γ and X = Z at 95% CL are shown in Fig. 7.40 and Fig. 7.41, respectively. These have been calculated using Eq. 3.4 and Eq. 3.5.
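Eqs. 3.4 and 3.5 are given in Chapter 3 and are not reproduced here. Assuming the standard form of an annihilation line flux, Φ = N_γ⟨σv⟩J/(8π m_χ²) with J the averaged line-of-sight integral, the inversion to a cross-section limit can be sketched as follows (an illustration of the generic relation, not the thesis implementation):

```python
import numpy as np

def sigmav_upper_limit(flux_ul, m_chi, jbar, n_gamma=2):
    """Invert the standard annihilation line-flux relation
    Phi = n_gamma * <sigma v> * Jbar / (8 * pi * m_chi^2)
    to obtain an upper limit on <sigma v>. Units must be chosen
    consistently (e.g. flux in cm^-2 s^-1 sr^-1 and Jbar in
    GeV^2 cm^-5 sr^-1); this is a sketch of the generic form,
    not the thesis implementation of Eq. 3.4."""
    return flux_ul * 8.0 * np.pi * m_chi ** 2 / (n_gamma * jbar)
```

The decay-lifetime limits of Fig. 7.41 follow from an analogous inversion of the corresponding decay flux relation.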
As can be observed when comparing the exposures for the simulated obssim2 data and for the measured Fermi-LAT data, the exposure of the latter is about a factor of 1.5 larger than that of the former. The cause of this discrepancy has not been investigated. That a discrepancy exists is, however, not unreasonable, since a number of factors differ between the two cases and can be expected to contribute significantly to the value of the exposure. For example, the two data sets use different versions of the background rejection, the obssim2 data using Pass4V2 and the Fermi-LAT data using Pass6V3, which affects the response and effective area of the detector.
An approximate comparison between the calculated upper limits on the flux from obssim2 data and from Fermi-LAT data could be made if the exposure were corrected to match that of the Fermi-LAT data, and if the difference in solid angle of the two regions-of-interest and the different ways of calculating it (the method chosen for Fermi-LAT data being more accurate) were taken into account. However, such a comparison would not make much sense, since the obssim2 data was never meant to be realistic, but rather a testing platform for the plethora of science analyses within the Fermi-LAT Collaboration (as mentioned in Section 7.5).
Figure 7.40. The unbinned profile likelihood upper limits at 95% confidence level on the velocity-averaged cross-section as a function of the spectral line energy for three halo profiles (NFW, Einasto and isothermal). Solid lines correspond to the χχ → γγ channel and dashed lines represent the χχ → γZ channel.

Figure 7.41. The unbinned profile likelihood lower limits at 95% confidence level on the decay lifetime as a function of the spectral line energy for three halo profiles (NFW, Einasto and isothermal). Solid lines correspond to the χχ → γγ channel and dashed lines represent the χχ → γZ channel.

As a final cross-check, the upper limit on the flux from Fermi-LAT data using the unbinned ProFinder is compared with the upper limit on the flux as calculated by the, in many ways less accurate, binned ProFinder. In the binned case, the signal is basically assumed to be contained within one bin, which is not true in the strict sense. Furthermore, the whole energy range is fitted by two power-law functions in the binned case, instead of individual power-law functions in sliding energy windows. Utilising more information, i.e. performing the fit unbinned instead of binned and taking the energy dispersion into account, should also give more accurate results. The two cases are shown in Fig. 7.42. As can be seen in the figure, the two sets of limits are still relatively consistent when the effects mentioned above are taken into consideration.
Figure 7.42. The upper limits at 95% confidence level on the flux as a function of
the spectral line energy, as calculated by the binned (dashed) and unbinned (solid)
ProFinder.
7.7 Summary and conclusions
In this chapter, different statistical methods have been benchmarked in terms of their coverage and power. The tested methods were a Bayesian method with Bayes factors, Feldman & Cousins, profile likelihood and a non-standard χ² method.
In designing a hypothesis test or a method for confidence interval calculations, the first requirement is on the probability of a false detection (i.e. how often the true signal parameter is not contained in the intervals). From the results, it can be seen that only the profile likelihood method has nominal coverage (a nominal rate of type-I errors). It is followed by the Feldman & Cousins method, which ignores the uncertainties in the background estimate, and the Bayes factor method. The χ² method undercovers by as much as 10%, probably because it ignores uncertainties in the background estimate and because it should be less reliable at low statistics.
Allowing more false detections should intuitively imply larger power. Indeed, the profile likelihood has the worst power and the χ² method the largest. However, one needs to keep in mind that with the χ² method, a detection nominally at 99% confidence level only corresponds to between 90 and 96% actual confidence level.
Comparing the power of methods that do not have the same coverage does not make much sense. The choice of method should instead be a two-step process: firstly, the de facto coverage (or false detection rate) should be calculated; secondly, the method with the largest power should be chosen from those with similar coverage. Since a nominal false detection rate is particularly important when searching for new physics such as dark matter, the profile likelihood method was further developed into a spectral line search for Fermi-LAT data.
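Step one of this two-step process, determining the de facto coverage, is typically done with toy Monte Carlo experiments. A purely illustrative sketch, using a naive Gaussian-approximation interval for a Poisson mean (not one of the four methods benchmarked above), shows how undercoverage at low statistics can be measured:

```python
import numpy as np

def toy_coverage(mu_true, n_toys=20000, z=1.96, seed=1):
    """Estimate the de facto coverage of a naive Gaussian-approximation
    interval mu_hat +/- z * sqrt(mu_hat) for a Poisson mean, using toy
    Monte Carlo experiments. Purely illustrative: this is not one of
    the four methods benchmarked in the thesis."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(mu_true, size=n_toys).astype(float)
    half = z * np.sqrt(n)
    covered = (n - half <= mu_true) & (mu_true <= n + half)
    return float(covered.mean())

# At low statistics this naive interval undercovers its nominal 95%:
cov = toy_coverage(mu_true=3.0)
```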
A binned profile likelihood method (referred to in this thesis as the binned ProFinder) and an additional statistical method called Scan Statistics were implemented for testing on a one-year full-sky simulation called obssim2, but no lines were found at the 5σ level. As a template, upper limits at 95% confidence level on the flux and the velocity-averaged cross-section for the γγ final state were placed with the binned ProFinder and compared to typical allowed regions of parameter space in MSSM and mSUGRA, with constraints provided by accelerators and the WMAP results included. However, fairly large boost factors (of the order of 10³) would be needed to constrain the parameter space further.
That neither of the two methods could detect the signal is not surprising, since the flux of dark matter implemented in the simulation was not within the Fermi-LAT sensitivity; a significant line signal would therefore most likely not have been detected by any statistical method.
Both Scan Statistics and the binned ProFinder should perform worse than methods that also include information about the shape of the line. In the methods applied to simulated data, the bin width was defined to include most of the simulated line, so any information about the line shape was neglected. The two methods are also expected to perform worse if the measured background cannot be easily parametrised. For Scan Statistics, this would introduce a difficulty in constructing a set of variable bin widths that gives a uniform spectrum. For the binned ProFinder, the difficulty would instead be to perform a good fit from which the background estimates can be drawn. Both methods are also binned, which leads to a loss of information; an unbinned method should be more accurate.
Consequently, for the analysis on almost one year of measured Fermi-LAT data, the profile likelihood method was further developed to include information about the energy dispersion (i.e. the line shape), an unbinned fit to the data and a localised background estimation through a sliding energy window. With the final implementation (referred to in this thesis as the unbinned ProFinder), no line detection could be made at photon energies ranging from 30 GeV to 200 GeV (using data from 20 GeV to 300 GeV). The largest “signal” was located at 40 GeV, where the significance was roughly 1.4σ.
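Under the usual asymptotic assumptions, a local significance of this kind follows from the drop in −ln(likelihood) between the background-only fit and the best fit; a minimal sketch:

```python
import numpy as np

def significance(nll_null, nll_best):
    """Approximate local significance from a profile-likelihood ratio:
    S = sqrt(2 * (nll_null - nll_best)), where nll is -ln(likelihood)
    at the background-only fit (signal fraction f = 0) and at the best
    fit. Assumes the usual asymptotic chi-squared behaviour."""
    return float(np.sqrt(2.0 * max(nll_null - nll_best, 0.0)))

# A drop of about one unit in -ln(likelihood), as in the likelihood
# curve of Fig. 7.34, corresponds to roughly 1.4 sigma.
```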
To conclude, the upper limits at 95% confidence level, shown in Fig. 7.40, are still a factor of 10 away from the theoretically interesting MSSM and mSUGRA parameter spaces seen in Fig. 7.32. However, the limits disfavour, by a factor of 2–5, one model in which the wino is the lightest supersymmetric particle [80]. For the γZ final state, this model predicts ⟨σv⟩γZ ≈ 1.4 × 10⁻²⁶ cm³ s⁻¹ at Eγ ≈ 170 GeV.
Chapter 8
Discussion and outlook
A number of factors have not been taken into account in the spectral line search. These include foremost the systematic uncertainties in the exposure and in the energy dispersion, but also smaller effects, such as the absolute shift in energy observed in the beam test analysis. It is believed that the interpretation of the results will not change significantly due to the omission of these factors, which can most likely be implemented in future versions of the spectral line search. In principle, it should for example be possible to include the systematic uncertainty in the likelihood model itself.
The region-of-interest chosen for the analysis on Fermi-LAT data is not necessarily the best place to look for a spectral line from dark matter, and it should be noted that it is not optimised with respect to any dark matter halo profile.
However, any region has its advantages and disadvantages. The galactic centre has fairly large photon statistics but is affected by source confusion and a strong diffuse photon background. Alternative locations that may prove to give a better signal-to-noise ratio include dark matter satellites (substructures containing only dark matter), dwarf spheroidal galaxies (substructures with optical counterparts but with high mass-to-light ratios) and galaxy clusters at high galactic latitudes, where the photon background is lower and the source identification is better. The extragalactic background may also prove to be a better location. These potential regions have not been studied in this thesis, but most of them have dedicated dark matter searches within the Fermi-LAT Collaboration.
A region-of-interest optimised for spectral line searches should in principle improve the sensitivity and should therefore be considered in future implementations.
The search for a spectral line from dark matter could probably be improved further through a new energy reconstruction algorithm that improves the energy resolution of the Fermi-LAT beyond the algorithms already in place. Such an algorithm would, by definition, decrease the smearing of the signal over a larger energy range and would consequently improve the measured significance of the signal. The development of a new energy reconstruction algorithm is beyond the scope of this thesis, but should be considered in the future in order to maximise the chances of finding a spectral line from dark matter.
It can be argued that a slightly larger fraction of charged particles can be accepted if the gain is a better energy resolution, which increases the chance of seeing a spectral line from dark matter. Another approach is therefore to modify the current selection of gamma-rays in Fermi-LAT data; more specifically, this implies relaxing the cuts used to reject charged particles if the energy resolution is thereby improved. Due to time constraints, an event class specifically developed for spectral lines has not been considered in this thesis, but it can be in future work.
On a final note, an extension of the spectral line search to higher energies, up to about 1 TeV, should also be considered in future work, in order to close the current gap in dark matter searches between the Fermi-LAT and ground-based Cherenkov experiments. Such an analysis, however, also requires more detailed studies of the high-energy behaviour of the energy reconstruction, since a large portion of the shower will then leak outside the detector, as well as a better understanding of the charged particle contamination.
Acknowledgements
My deepest gratitude goes to my supervisors, Jan Conrad at Stockholm University
and Staffan Carius at Linnaeus University (formerly University of Kalmar) for
taking care of me and giving me the opportunity to work on such an interesting
and rewarding experiment as Fermi. I would also like to thank my supervisor at
the Royal Institute of Technology, Mark Pearce, for helping me with all things
administrative. A special thanks goes to Jan Conrad, whose skilled guidance kept
me in the master plan and on the right path. Without your help, this work would
have been significantly more impossible.
I gratefully acknowledge the financial support and monthly salary from Linnaeus
University during these four years and the Swedish National Space Board, which
partially funded this work.
I extend my gratitude also to all my friends and colleagues at the Royal Institute of Technology and Stockholm University, who made all of this more enjoyable and worth the blood, sweat and tears. A special thanks goes to Alexander Sellerholm,
Cecilia Marini Bettolo, Erik Lundström, Karl-Johan Grahn, Mózsi Kiss and Oscar
Larsson for the many useful scientific and technical discussions over the years.
Many thanks also to everyone in the Fermi-LAT Collaboration for your valuable
insights, your help, your patience and your friendly company. I have truly enjoyed
working and spending time with you. I am also grateful to Johan Bregeon, Philippe
Bruel and Joakim Edsjö for providing me with a few of the figures in this thesis. I
also want to thank Elaine Beidatsch at SLAC Human Resources, for reminding me
every once in a while of the importance of finishing my Ph.D.
My gratitude also goes to sugar and caffeine for always being there for me when
I needed your support. Without you, my collection of empty energy drinks would
not be as large.
Thank you, uncle Tapani, for prompting my interest in astronomy so many
years ago. Finally, I want to thank my parents and my girlfriend Aya for believing
in me and for your loving and unconditional support even during times of severe
mental weariness and personal despondency.
List of Figures

1.1   Muon interactions with matter  . . .  7
1.2   Electron interactions with matter  . . .  9
1.3   Photon interactions with matter  . . .  10
1.4   Electromagnetic shower from gamma-ray  . . .  10

2.1   Sensitivities of gamma-ray experiments  . . .  18
2.2   Explorer XI detector system  . . .  20
2.3   SAS II detector system  . . .  21
2.4   EGRET detector system  . . .  22
2.5   Third EGRET Catalog  . . .  23
2.6   First Fermi-LAT Catalog  . . .  24

3.1   Bullet cluster  . . .  26

4.1   Fermi Gamma-ray Space Telescope  . . .  35
4.2   Orbit of the Fermi Gamma-ray Space Telescope  . . .  36
4.3   Large Area Telescope  . . .  37
4.4   Gamma-ray conversion in the TKR  . . .  39
4.5   Fermi-LAT CAL  . . .  40
4.6   Fermi-LAT ACD  . . .  41
4.7   ACD tile  . . .  41
4.8   ACD tile overlap  . . .  42
4.9   ACD backsplash  . . .  42
4.10  Fermi-LAT performance - Point-spread-function  . . .  48
4.11  Fermi-LAT performance - Effective area  . . .  48
4.12  Fermi-LAT performance - Energy resolution  . . .  48
4.13  GBM detectors  . . .  50

5.1   Calibration Unit  . . .  52
5.2   PS experimental setup (sketch)  . . .  53
5.3   PS experimental setup (photo)  . . .  54
5.4   PS tagging energies  . . .  54
5.5   SPS experimental setup (sketch)  . . .  56
5.6   SPS experimental setup (photo)  . . .  56

6.1   Muon energy spectrum  . . .  62
6.2   Position reconstruction for photons  . . .  65
6.3   Position reconstruction for tagged photons  . . .  66
6.4   Position reconstruction for electrons  . . .  67
6.5   Calibration runs  . . .  68
6.6   Asymmetry curves for electrons  . . .  69
6.7   Asymmetry curves for muons  . . .  69
6.8   Position error in CAL crystal  . . .  70
6.9   Space angle distributions for photons  . . .  71
6.10  Space angle distributions for tagged photons  . . .  71
6.11  Space angle distributions for electrons  . . .  71
6.12  Energy deposition in CAL layers  . . .  73
6.13  Longitudinal shower profile  . . .  75
6.14  Energy distributions for 5 GeV electrons  . . .  76
6.15  Energy distributions for 10 GeV electrons  . . .  77
6.16  Energy distributions for 20 GeV electrons  . . .  78
6.17  Energy distributions for 50 GeV electrons  . . .  79
6.18  Energy distributions for 99 GeV electrons  . . .  80
6.19  Energy distributions for 196 GeV  . . .  81
6.20  Energy distributions for 282 GeV  . . .  82
6.21  Energy resolutions for data at 5–282 GeV  . . .  83
6.22  Energy resolutions for the simulations at 5–282 GeV  . . .  83
6.23  Energy resolution difference at 5–282 GeV  . . .  84
6.24  Energy peak position difference at 5–282 GeV  . . .  84
6.25  Comparison with different LPM-effect implementations  . . .  86
6.26  Comparison with different amounts of additional material  . . .  87

7.1   Regions-of-interest  . . .  92
7.2   Extradiffuse class effective area  . . .  94
7.3   Likelihood method binning effect  . . .  94
7.4   Confidence belt construction  . . .  97
7.5   Coverage comparison  . . .  101
7.6   Power comparison  . . .  102
7.7   Detection principle of the binned ProFinder  . . .  103
7.8   Maximum likelihood error determination  . . .  105
7.9   Energy dispersion at 20 GeV  . . .  106
7.10  Energy dispersion at 50 GeV  . . .  106
7.11  Energy dispersion at 100 GeV  . . .  107
7.12  Energy dispersion at 150 GeV  . . .  107
7.13  Energy dispersion at 300 GeV  . . .  108
7.14  Fitted and interpolated energy dispersions  . . .  108
7.15  Sliding window in energy  . . .  109
7.16  Coverage of unbinned ProFinder  . . .  110
7.17  Coverage of unbinned ProFinder, close-up  . . .  111
7.18  Power of unbinned ProFinder  . . .  112
7.19  Trial factor  . . .  112
7.20  Power of Scan Statistics  . . .  113
7.21  Null distribution from Scan Statistics  . . .  115
7.22  Power and success rate of Scan Statistics  . . .  116
7.23  Fermi-LAT performance  . . .  117
7.24  Galactic centre from obssim2 data  . . .  119
7.25  Scan Statistics histogram from obssim2 data  . . .  120
7.26  Signal event limits from obssim2 data  . . .  121
7.27  Exposure map for obssim2 data  . . .  121
7.28  Energy dependence of exposure for obssim2 data  . . .  122
7.29  Flux limits from obssim2 data  . . .  123
7.30  Line-of-sight integrals for broken annulus  . . .  124
7.31  Cross-section limits from obssim2 data  . . .  124
7.32  SUSY model exclusions from obssim2 data  . . .  125
7.33  Counts map from Fermi-LAT  . . .  126
7.34  Fit and likelihood curve for Fermi-LAT data  . . .  128
7.35  Residual plot for Fermi-LAT data  . . .  129
7.36  Signal event limits from Fermi-LAT data  . . .  129
7.37  Exposure map for Fermi-LAT data  . . .  130
7.38  Energy dependence of exposure in Fermi-LAT data  . . .  130
7.39  Flux limits from Fermi-LAT data  . . .  131
7.40  Cross-section limits from Fermi-LAT data  . . .  132
7.41  Decay lifetime limits from Fermi-LAT data  . . .  132
7.42  Comparison between binned and unbinned ProFinder  . . .  133
List of Tables

4.1   Fermi-LAT and EGRET performances  . . .  49

5.1   PS configurations  . . .  55
5.2   SPS configurations  . . .  57

6.1   The 68% containment of position reconstruction  . . .  64
6.2   Differences in position reconstruction between data and simulation  . . .  64
6.3   The 68% containment of direction reconstruction  . . .  72

7.1   Scan Statistics performance  . . .  115
7.2   Dark matter components in obssim2  . . .  118
7.3   Fit results for Fermi-LAT data  . . .  127
Bibliography
[1] W.-M. Yao, et al., Journal of Physics G, 33 (2006) 1.
[2] J. Lindhard & M. Scharff, Physical Review, 124 (1961) 128.
[3] H.H. Andersen & J.F. Ziegler, “Hydrogen: Stopping Powers and Ranges in All
Elements”, Vol. 3 of “The Stopping and Ranges of Ions in Matter”, Pergamon
Press (1977).
[4] H. Bichsel, Physical Review A, 41 (1990) 3642.
[5] F. Schmidt, University of Leeds, UK. “CORSIKA Shower Images”.
http://www.ast.leeds.ac.uk/∼fs/showerimages.html. Visited 6 May,
2010.
[6] K.S. Cheng & G.E. Romero, “Cosmic Gamma-Ray Sources”, Kluwer Academic
Publishers (2004).
[7] T. Bringmann, L. Bergström & J. Edsjö, Journal of High Energy Physics, 01
(2008) 049.
[8] I. Moskalenko, et al., The Astrophysical Journal, 681 (2008) 1708.
[9] N. Giglietto (Fermi-LAT Collaboration), “2009 Fermi Symposium”, eConf Proceedings C091122, [arXiv:astro-ph/0912.3734].
[10] D. Petry, AIP Conference Proceedings, 745 (2005) 709, [arXiv:astro-ph/0410487].
[11] A.A. Abdo, et al., (Fermi-LAT Collaboration), Physical Review D, 80 (2009) 122004.
[12] D.J. Thompson, et al., Journal of Geophysical Research, 102 (1997) 14735.
[13] E. Orlando & A.W. Strong, Astronomy & Astrophysics, 480 (2008) 847.
[14] E. Orlando & N. Giglietto, (Fermi-LAT Collaboration), “2009 Fermi Symposium”, eConf Proceedings C091122, [arXiv:astro-ph/0912.3775].
[15] A.A. Abdo, et al., (Fermi-LAT Collaboration), Physical Review Letters, 103 (2009) 251101.
[16] A.A. Abdo, et al., (Fermi-LAT Collaboration), The Astrophysical Journal Supplement Series, 187 (2010) 460.
[17] E. Fermi, Physical Review, 75 (1949) 1169.
[18] L. Drury, Space Science Reviews, 36 (1983) 57.
[19] T.K. Gaisser, R.J. Protheroe & T. Stanev, The Astrophysical Journal, 492 (1998) 219.
[20] A.A. Abdo, et al., (Fermi-LAT Collaboration), The Astrophysical Journal, 712 (2010) 459.
[21] A.A. Abdo, et al., (Fermi-LAT Collaboration), Science, 326 (2009) 1512.
[22] A.A. Abdo, et al., (Fermi-LAT Collaboration), The Astrophysical Journal, 715 (2010) 429.
[23] A.A. Abdo, et al., (Fermi-LAT Collaboration), Physical Review Letters, 104 (2010) 101101.
[24] H.E.S.S. Collaboration. http://www.mpi-hd.mpg.de/hfm/HESS/. Visited 6
May, 2010.
[25] MAGIC Collaboration. http://wwwmagic.mppmu.mpg.de/. Visited 6 May,
2010.
[26] VERITAS Collaboration. http://veritas.sao.arizona.edu/. Visited 6
May, 2010.
[27] CANGAROO Collaboration. http://icrhp9.icrr.u-tokyo.ac.jp/. Visited
6 May, 2010.
[28] J. Hinton, New Journal of Physics, 11 (2009) 055005.
[29] HAWK Collaboration. http://hawc.umd.edu/. Visited 6 May, 2010.
[30] Milagro Collaboration. http://umdgrb.umd.edu/cosmic/milagro.html. Visited 6 May, 2010.
[31] P. Villard, Comptes rendus, 130 (1900), 1010.
[32] L. Gerward, Physics in Perspective, 1 (1999) 367.
[33] E. Rutherford & E.N. da C. Andrade, Philosophical Magazine Series 6, 27
(1914) 854.
[34] E. Rutherford & E.N. da C. Andrade, Philosophical Magazine Series 6, 28
(1914) 263.
[35] R.A. Millikan & G.H. Cameron, Physical Review, 37 (1931) 235.
[36] E. Feenberg & H. Primakoff, Physical Review, 73 (1948) 449.
[37] S. Hayakawa, Progress of Theoretical Physics, 8 (1952) 571.
[38] G.W. Hutchinson, Philosophical Magazine Series 7, 43 (1952) 847.
[39] P. Morrison, Il Nuovo Cimento, 7 (1958) 858.
[40] W.L. Kraushaar & G.W. Clark, Physical Review Letters, 8 (1962) 3.
[41] OSO III experiment. http://heasarc.gsfc.nasa.gov/docs/heasarc/missions/oso3.html. Visited 6 May, 2010.
[42] G.W. Clark, G.P. Garmire & W.L. Kraushaar, The Astrophysical Journal, 153
(1968) 203.
[43] R.W. Klebsedal, I.B. Strong & R.A. Olson, The Astrophysical Journal, 182
(1973) 85.
[44] Vela experiment. http://heasarc.gsfc.nasa.gov/docs/vela5b/vela5b_about.html. Visited 6 May, 2010.
[45] C.E. Fichtel, et al., The Astrophysical Journal, 198 (1975) 163.
[46] G.F. Bignami, et al., Space Science Instrumentation, 1 (1975) 245.
[47] W. Hermsen, Philosophical Transactions of the Royal Society of London A,
301 (1981) 519.
[48] V. Schönfelder, et al., The Astrophysical Journal Supplement Series, 86 (1993) 657.
[49] V. Schönfelder, et al., Astronomy & Astrophysics Supplement Series, 143
(2000) 145.
[50] G. Kanbach, et al., Space Science Reviews, 49 (1988) 69.
[51] R.C. Hartman, et al., The Astrophysical Journal Supplement Series, 123 (1999)
79.
[52] M. Tavani, et al., Nuclear Instruments and Methods in Physics Research A,
588 (2008) 52.
[53] C. Pittori, et al., Astronomy & Astrophysics, 506 (2009) 1563.
[54] A.A. Abdo, et al. (Fermi-LAT Collaboration), [arXiv:astro-ph/1002.2280] (2010).
[55] L. Bergström, Reports on Progress in Physics, 63 (2000) 793.
[56] G. Bertone, D. Hooper & J. Silk, Physics Reports, 405 (2005) 279.
[57] F. Zwicky, Helvetica Physica Acta, 6 (1933) 110.
[58] S. van den Bergh, Publications of the Astronomical Society of the Pacific, 111
(1999) 657.
[59] S. Sarkar, Reports on Progress in Physics, 59 (1996) 1493.
[60] J.A. Tyson, G.P. Kochanski & I.P. Dell’Antonio, The Astrophysical Journal,
498 (1998) L107.
[61] D.N. Spergel, et al., The Astrophysical Journal Supplement Series, 170 (2007)
377.
[62] D. Clowe, et al., The Astrophysical Journal, 648 (2006) L109.
[63] P. Ullio, et al., Physical Review D, 66 (2002) 123502.
[64] J.J. Binney & N.W. Evans, Monthly Notices of the Royal Astronomical Society,
327 (2001) L27.
[65] A. Burkert, The Astrophysical Journal, 447 (1995) L25.
[66] F.C. van den Bosch, et al., The Astronomical Journal, 119 (2000) 1579.
[67] A. Klypin, A.V. Kravtsov & O. Valenzuela, The Astrophysical Journal, 522
(1999) 82.
[68] J.P. Ostriker, et al., Science, 300 (2003) 1909.
[69] M. Milgrom, The Astrophysical Journal, 270 (1983) 365.
[70] J.D. Bekenstein, Physical Review D, 70 (2004) 083509.
[71] G. Jungman, M. Kamionkowski & K. Griest, Physics Reports, 267 (1996) 195.
[72] M. Taoso, G. Bertone & A. Masiero, Journal of Cosmology and Astroparticle
Physics, 03 (2008) 022.
[73] J.R. Primack, SLAC Beam Line, 31N3 (2001) 50, [astro-ph/0112336].
[74] F. Wilczek, Physical Review Letters, 40 (1978) 279.
[75] S. Weinberg, Physical Review Letters, 40 (1978) 223.
[76] R.D. Peccei & H.R. Quinn, Physical Review Letters, 38 (1977) 1440.
[77] G. Servant & T.M.P. Tait, Nuclear Physics B, 650 (2003) 391.
[78] L. Bergström & P. Ullio, Nuclear Physics B, 504 (1997) 27.
[79] L. Bergström, New Journal of Physics, 11 (2009) 105006.
[80] G. Kane, R. Lu, & S. Watson, Physics Letters B, 681 (2009) 151.
[81] M. Gustafsson, et al., Physical Review Letters, 99 (2007) 041301.
[82] G. Bertone, et al., Physical Review D, 80 (2009) 023512.
[83] L. Bergström, et al., Journal of Cosmology and Astroparticle Physics, 04 (2005)
004.
[84] Y. Mambrini, Journal of Cosmology and Astroparticle Physics, 12 (2009) 005.
[85] A. Ibarra & D. Tran, Physical Review Letters, 100 (2008) 061301.
[86] C. Arina, et al., Journal of Cosmology and Astroparticle Physics, 03 (2010)
024.
[87] C.B. Jackson, et al., [arXiv:hep-ph/0912.0004] (2009).
[88] M. Lattanzi & J. Silk, Physical Review D, 79 (2009) 083523.
[89] P. Gondolo, et al., Journal of Cosmology and Astroparticle Physics, 07 (2004)
008.
[90] R. Catena & P. Ullio, [arXiv:astro-ph/0907.0018] (2009).
[91] J.F. Navarro, C.S. Frenk & S.D.M. White, The Astrophysical Journal, 490 (1997) 493.
[92] J.N. Bahcall & R.M. Soneira, The Astrophysical Journal Supplement Series,
44 (1980) 73.
[93] B. Moore, et al., Monthly Notices of the Royal Astronomical Society, 310 (1999)
1147.
[94] A.V. Kravtsov, The Astrophysical Journal, 502 (1998) 48.
[95] J. Einasto, Trudy Instituta Astrofiziki Alma-Ata, 5 (1965) 87.
[96] D. Merritt, et al., The Astronomical Journal, 132 (2006) 2685.
[97] DAMA/LIBRA Collaboration. http://people.roma2.infn.it/~dama/web/. Visited 6 May, 2010.
[98] CDMS Collaboration. http://cdms.berkeley.edu/. Visited 6 May, 2010.
[99] IceCube Collaboration. http://icecube.wisc.edu/. Visited 6 May, 2010.
[100] D. Hubert (IceCube Collaboration), Nuclear Physics B (Proceedings Supplements), 173 (2007) 87.
[101] F. Halzen & D. Hooper, New Journal of Physics, 11 (2009) 105019.
[102] ANTARES Collaboration. http://antares.in2p3.fr/. Visited 6 May, 2010.
[103] J.D. Zornoza (ANTARES Collaboration), Nuclear Physics B (Proceedings
Supplements), 173 (2007) 79.
[104] PAMELA Collaboration. http://pamela.roma2.infn.it/. Visited 6 May,
2010.
[105] R. Bernabei, et al., The European Physical Journal C, 56 (2008) 333.
[106] Z. Ahmed, et al., [arXiv:astro-ph/0912.3592] (2009).
[107] O. Adriani, et al., Nature, 458 (2009) 607.
[108] M. Boezio, et al., New Journal of Physics, 11 (2009) 105023.
[109] L. Bergström, T. Bringmann & J. Edsjö, Physical Review D, 78 (2008) 103520.
[110] J. Chang, et al., Nature, 456 (2008) 362.
[111] A.A. Abdo, et al. (Fermi-LAT Collaboration), Physical Review Letters, 102 (2009) 181101.
[112] D. Grasso, et al., Astroparticle Physics, 32 (2009) 140.
[113] P. Blasi, Physical Review Letters, 103 (2009) 051104.
[114] R. Battiston, et al., Nuclear Instruments and Methods in Physics Research
A, 588 (2008) 227.
[115] A.A. Abdo, et al. (Fermi-LAT Collaboration), [arXiv:astro-ph/1002.2239] (2010).
[116] A.A. Abdo, et al. (Fermi-LAT Collaboration), The Astrophysical Journal, 712 (2010) 147.
[117] P. Scott, et al., Journal of Cosmology and Astroparticle Physics, 01 (2010)
031.
[118] A.A. Abdo, et al. (Fermi-LAT Collaboration), Journal of Cosmology and Astroparticle Physics, 04 (2010) 014.
[119] A.A. Abdo, et al. (Fermi-LAT Collaboration), Physical Review Letters, 104 (2010) 091302.
[120] W.B. Atwood, et al., Astroparticle Physics, 28 (2007) 422.
[121] S. Bergenius, Licentiate Thesis, Royal Institute of Technology, Stockholm, Sweden, ISBN: 91-7283-754-3 (2004).
[122] A.A. Moiseev, et al., Astroparticle Physics, 27 (2007) 339.
[123] R.E. Kalman, Journal of Basic Engineering D, 82 (1960) 35.
[124] J.A. Hernando, SCIPP preprint 98/18 (1998).
[125] A.A. Abdo, et al. (Fermi-LAT Collaboration), Astroparticle Physics, 32 (2009) 193.
[126] Summary of N-tuples. http://glast-ground.slac.stanford.edu/workbook/pages/gleamOvrvw/summaryNtuples.htm. Visited 6 May, 2010.
[127] Fermi-LAT performance (Pass6). http://www-glast.slac.stanford.edu/software/IS/glast_lat_performance.htm. Visited 6 May, 2010.
[128] C. Meegan, et al., [arXiv:astro-ph/0908.0450] (2009).
[129] Geant4 software. http://www.geant4.org. Visited 6 May, 2010.
[130] S. Agostinelli, et al., Nuclear Instruments and Methods in Physics Research
A, 506 (2003) 250.
[131] L. Baldini, et al., AIP Conference Proceedings, 921 (2007) 190.
[132] EGS5 software. http://rcwww.kek.jp/research/egs/egs5.html. Visited 6
May, 2010.
[133] Mars15 software. http://www-ap.fnal.gov/MARS/. Visited 6 May, 2010.
[134] F. Stoehr, et al., Monthly Notices of the Royal Astronomical Society, 345
(2003) 1313.
[135] P.D. Serpico & G. Zaharijas, Astroparticle Physics, 29 (2008) 380.
[136] A.N. Kolmogorov, “Grundbegriffe der Wahrscheinlichkeitsrechnung”, Julius
Springer, Berlin (1933).
[137] A.N. Kolmogorov, “Foundations of the Theory of Probability”, Chelsea Publishing Company, New York (1956).
[138] F. James, “Statistical Methods in Experimental Physics (2nd Edition)”,
World Scientific Publishing Co. Pte. Ltd. (2006).
[139] J. Neyman, Philosophical Transactions of the Royal Society of London A, 236
(1937) 333.
[140] G.J. Feldman & R.D. Cousins, Physical Review D, 57 (1998) 3873.
[141] W.A. Rolke, A.M. Lopez & J. Conrad, Nuclear Instruments and Methods in
Physics Research A, 551 (2005) 493.
[142] G. Cowan, “Statistical Data Analysis”, Oxford University Press Inc., New
York (1998).
[143] J. Conrad, J. Scargle & T. Ylinen, AIP Conference Proceedings, 921 (2007)
586.
[144] J. Conrad, invited contribution to “Workshop on Exotic Physics with Neutrino Telescopes”, Uppsala, Sweden, Sept. 2006, [arXiv:astro-ph/0612082].
[145] ROOT software. http://root.cern.ch/. Visited 6 May, 2010.
[146] W. Verkerke & D. Kirkby, talk from the “2003 Computing in High Energy and Nuclear Physics” conference (CHEP03), La Jolla, CA, USA, March 2003, [arXiv:physics/0306116].
[147] F. James, “MINUIT Function Minimization and Error Analysis”, CERN
Program Library Long Writeup D506, (1994).
[148] F. Terranova, Nuclear Instruments and Methods in Physics Research A, 519
(2004) 659.
[149] Science Tools software. http://glast-ground.slac.stanford.edu/Workbook/sciTools_Home.htm. Visited 6 May, 2010.
[150] FTOOLS software. http://heasarc.nasa.gov/lheasoft/ftools/ftools_menu.html. Visited 6 May, 2010.
[151] Fermi-LAT performance (Pass4). http://www-glast.slac.stanford.edu/software/IS/archive_latPerformance_pass4v2.html. Visited 6 May, 2010.
[152] E.A. Baltz, et al., Physical Review D, 74 (2006) 103521.
[153] E.A. Baltz, et al., Journal of Cosmology and Astroparticle Physics, 07 (2008)
013.