Production of the power distribution boxes for the full ALICE Transition Radiation Detector and the development and integration of their control system

Faculty of Physics and Astronomy
University of Heidelberg
Diploma Thesis
in Physics
submitted by
Michael Neher
born in Waiblingen
2008
Production of the power distribution boxes for the full ALICE
Transition Radiation Detector and the development and integration of
their control system
This diploma thesis has been carried out by Michael Neher at the
Physikalisches Institut
under the supervision of
Helmholtz Young Investigator
Dr. Kai Schweda
Production of the power distribution boxes for the full ALICE Transition Radiation
Detector and the development and integration of their control system
Within this thesis, 18 (+1 spare) power distribution boxes (PDB) were produced based on an
existing prototype developed in an earlier Master thesis. Some improvements were made to
enhance mechanical stability. A PDB teststand consisting of a power control unit (PCU) and
30 DCS boards, powered by a Wiener PL512/M power supply, was set up at the Physikalisches
Institut in Heidelberg. All 19 PDBs were successfully tested and are ready for installation into
TRD supermodules at the supermodule construction site at the University of Münster.
A control system was developed providing a graphical user interface based on the program
package PVSSII. Furthermore, a finite state machine was defined and implemented for automated
operation using the programming language SMI++. This system is part of the TRD detector control
system and was installed on the TRD low voltage worker node in the counting room of ALICE.
Commissioning took place during a two-week ALICE run with cosmic events in December 2007.
During this run the two installed TRD supermodules were successfully operated. The control and
monitoring system developed in this thesis allows for operation of all 18 power distribution boxes
and 4 power control units for the full TRD.
Production of the power distribution boxes for the ALICE Transition Radiation Detector and the development and integration of their control system

Within this thesis, 18 (+1 spare) power distribution boxes (PDB) were produced based on an
existing prototype developed in an earlier Master thesis. Some improvements were made to the
prototype to ensure mechanical stability. A teststand for the power distribution boxes, consisting
of a power control unit (PCU) and 30 DCS boards, was set up at the Physikalisches Institut in
Heidelberg. The teststand is supplied with current and voltage by a Wiener PL512/M power supply.
All 19 PDBs were successfully tested and are now ready for installation into the TRD supermodules
in Münster.

A control system providing a graphical user interface was developed based on the program package
PVSSII. Furthermore, a finite state machine for automated operation was defined and implemented
on the basis of the programming language SMI++. This system is part of the TRD control system
and was installed on the TRD low voltage worker node in the ALICE counting room. Commissioning
was carried out during a two-week ALICE run with cosmic events. During this run the TRD
supermodules already installed in ALICE were successfully operated. The control and monitoring
system developed in this thesis allows for the control and monitoring of 18 power distribution boxes
and 4 power control units for the entire TRD.
Contents

1 Introduction
2 The Large Hadron Collider
  2.1 Accelerator Complex
  2.2 The ALICE Experiment
    2.2.1 The ALICE Online System
3 The Transition Radiation Detector
  3.1 Detector Design
  3.2 The low voltage system
  3.3 The DCS Low Voltage System
    3.3.1 The Power Control Unit
    3.3.2 The Power Distribution Box
    3.3.3 The Power Distribution Control Board
4 Production of the Power Distribution Boxes
  4.1 Hardware Improvements
  4.2 Test Procedure
5 The Detector Control System
  5.1 Finite State Machine
  5.2 PVSS
  5.3 The Distributed Information Management System
  5.4 The Detector Control System of the TRD
6 The Control System for the DCS-board Power-Supply System
  6.1 DIM-server to DIM-client Interface
  6.2 Controlling and Monitoring
    6.2.1 The PCU data point type structure in PVSSII
    6.2.2 Graphical User Interface
  6.3 Finite State Machine for the Power Control Unit
    6.3.1 States in the FSM
    6.3.2 Actions in the FSM
  6.4 Software Commissioning
7 Summary
A Mappings
B Summary of test results
C The overall state
D Installation of the PCU project
E DCS project distribution at CERN
Glossary
Bibliography
1 Introduction
Quantum chromodynamics (QCD) is the theory of strong interactions. Asymptotic freedom [1, 2]
is a remarkable feature of QCD, i.e. the interaction between quarks weakens as quarks get closer
to one another. Shortly after the idea of asymptotic freedom was introduced, it was realized that
this has a fascinating consequence. Above a critical temperature and density, quarks and gluons
are freed from their hadronic bonds, forming a deconfined phase of matter [3, 4] – a quark
gluon plasma (QGP). Our present world exists at low temperatures and densities with quarks
and gluons confined to the size of hadrons. But shortly after its origin, our universe was of much
higher temperature and density. It is thought that about 10 µs after the Big Bang all matter
visible today existed as a quark gluon plasma.
Solving QCD in regularized lattice calculations, at vanishing or finite net-baryon density, predicts a cross-over transition from the deconfined thermalized partonic matter to hadronic matter
at a critical temperature Tc ≈ 150–180 MeV [5]. A similar value was derived in the 1960s
by R. Hagedorn as the limiting temperature for hadrons when investigating hadronic matter [6].
The only way to create and study such a QGP in the laboratory is the collision of heavy
nuclei at the highest center-of-mass energies. A crucial question is to what extent matter is created
in these collisions, i.e. whether local equilibrium is achieved. If the system reaches equilibrium at
least approximately, then temperature, pressure, energy and entropy density can be defined. The
relation amongst these macroscopic parameters is given by the (partonic) equation of state.
Heavy-flavor (c, b) quarks are excellent tools to study the degree of thermalization of the initially
created matter [7]. Due to their large masses (m ≫ ΛQCD), heavy quarks are dominantly created
in early-stage perturbative QCD processes. The overall number of heavy quarks is conserved
since their mass is much larger than the maximum temperature of the medium, so that
thermal production is negligible. Also, cross sections for heavy quark-antiquark annihilation are
marginal [8]. As shown in Fig. 1.1, the large masses of heavy quarks are almost exclusively
generated through their coupling to the Higgs field in the electro-weak sector, while masses of
light quarks (u, d, s) are dominated by spontaneous breaking of chiral symmetry in QCD. This
means that in a QGP, where chiral symmetry might be restored, light quarks are left with their
bare current masses while heavy-flavor quarks remain heavy.
Frequent interactions at the partonic stage will cause these heavy quarks to participate in
collective motion [9, 10, 11] and finally kinetically equilibrate. This led to the idea of statistical
hadronization of charm quarks [12]. Calculations predict significant changes in the production of
hidden charm hadrons, e.g. J/ψ [13].
Quarkonia play a key role in research into the quark gluon plasma. In 1986, Matsui and Satz [14]
suggested that the high density of gluons in a quark gluon plasma should destroy charmonium
systems, in a process analogous to the Debye screening of the electromagnetic field in a plasma
through the presence of electric charges. Such a suppression was indeed observed by the NA50
collaboration [15] at the super proton synchrotron (SPS). However, absorption of charmonium in
the cold nuclear medium also contributes to the observed suppression [16] and the interpretation
of the SPS data remains inconclusive.
At high collider energies, the large number of charm-quark pairs produced leads to a new
production mechanism for charmonium, either through statistical hadronization at the phase
boundary [12, 17] or coalescence of charm quarks in the plasma [18, 19, 20, 21, 22]. At low
energy, the average number of charm-quark pairs produced in a collision is much lower than
Figure 1.1: Quark masses in the QCD vacuum and the Higgs vacuum (Higgs quark mass versus total quark mass, in MeV). A large fraction of the light quark masses is
due to chiral symmetry breaking in the QCD vacuum while heavy quarks attain almost all their mass from coupling
to the Higgs field. This figure has been taken from Ref. [7].
one, implying that charmonium is always formed from this particular pair. If charm quarks are
abundantly produced (on the order of some tens to a few hundred), charm quarks from different
pairs can combine to form charmonium, see Fig. 1.2. This mechanism works only if heavy charm
quarks can propagate over substantial distance to meet their counterpart. Under these conditions,
charmonium production scales quadratically with the number of charm-quark pairs [24]. Thus
enhancement rather than strong suppression is predicted for high collision energies. This would
be a clear signature of the formation of a quark gluon plasma with deconfined charm quarks and
thermalized light quarks.
The large hadron collider (LHC) at CERN near Geneva, Switzerland, will provide collisions
of nuclei with masses up to that of lead. Unprecedentedly high center-of-mass energies of up to
√sNN = 5.5 TeV per nucleon-nucleon pair for lead-lead collisions will be achieved. At these
energies, heavy quarks are abundantly produced.
A large ion collider experiment (ALICE) detector at LHC will measure most of the heavy quark
hadrons. Open charm hadrons are identified by their displaced decay vertex with high spatial
resolution applying silicon vertex technology. The ALICE transition radiation detector (TRD)
measures production of J/ψ and other quarkonia by identifying electrons and positrons from
electromagnetic decays over a large momentum range. The TRD consists of 540 readout chambers
arranged in 18 supermodules, each divided into five stacks and six layers. The front-end electronics of
each readout chamber is equipped with a detector control system (DCS) board for configuration
and monitoring. A DCS board is powered by 4 V at up to 1 A. For each supermodule, this power is
provided by a power distribution box (PDB) with 30 output channels. In total, 18 PDBs provide
DCS-board power for the full TRD. Four power control units (PCU) serve as a redundant and thus
highly reliable interface to the high level control system.
Within this thesis, 18 (+1 spare) power distribution boxes were produced in the electronics
workshop at the Physikalisches Institut at the University of Heidelberg and their performance
successfully tested. The PCUs were further improved based on an existing prototype [25] and
are now installed in the ALICE cavern. A detector control system was developed to operate and
monitor the TRD DCS board power and integrated into the ALICE TRD control system at CERN.
Figure 1.2: Statistical model predictions for charmonium production (RAA of the J/ψ versus Npart) relative to normalized p + p collisions for
RHIC (dashed line) and LHC (solid line) energies. The data point is for top RHIC energies as measured by the
PHENIX collaboration [23]. This figure has been taken from Ref. [24].
This system was successfully commissioned with the two presently installed TRD supermodules.
Thus the complete hardware (4 PCUs and 18 PDBs) to power the DCS board of the TRD readout
chambers and its control system is now available.
This thesis is organized as follows. Chapter 2 gives a short overview of the large hadron collider
and its four main experiments with a closer look at the ALICE detector which incorporates the
TRD. In Chap. 3 the detector design of the TRD is briefly summarized along with a closer look
at the low voltage system of the TRD. Among other things, the low voltage system provides the
power for the power distribution box (PDB) and the power control unit (PCU). The assembly
and system overview of the PCU and PDB as developed in [25] are provided in Chap. 3 as
well. Chapter 4 describes the hardware improvements applied to the PCU and PDB as well as
the test procedure for the power distribution boxes. A short introduction to the high level control
system and its tools as used in ALICE is given in Chap. 5. The development and integration of
the graphical user interface for controlling and monitoring the DCS board power supply system,
including PCU and PDB, are explained in detail in Chap. 6. A summary is given in Chap. 7.
2 The Large Hadron Collider
The large hadron collider (LHC) is currently under construction at the European organization
for nuclear research (CERN, from the French Conseil Européen pour la Recherche Nucléaire) near Geneva. The LHC will collide two counter-rotating beams
of protons or heavy ions at unprecedentedly high energy and luminosity in a circular tunnel of
27 km circumference. The LHC will provide proton-proton collisions at a design luminosity of
10³⁴ cm⁻²s⁻¹ and a center-of-mass energy of √s = 14 TeV [26]. This exceeds the maximum
Tevatron energy by one order of magnitude. For lead-lead collisions the maximum energy is
√sNN = 5.5 TeV per nucleon pair at a design luminosity of 10²⁷ cm⁻²s⁻¹. This collision energy
exceeds the relativistic heavy ion collider (RHIC) at the Brookhaven National Laboratory (BNL)
by a factor of 30. The experiment specially designed for heavy ion collisions is a large ion collider
experiment (ALICE). This section gives a brief overview of the accelerator complex and the four
main experiments at LHC.
2.1 Accelerator Complex
A schematic overview of the CERN accelerator system is shown in Fig. 2.1. Protons stemming
from a 90 kV duoplasmatron proton source are accelerated in the linear accelerator LINAC2 to a
kinetic energy of 50 MeV and then passed to the multi-ring proton synchrotron booster (PSB) for
acceleration to 1.4 GeV. In the proton synchrotron (PS) they reach 26 GeV and their bunch patterns are generated. After transfer to the super proton synchrotron (SPS) protons are accelerated
to 450 GeV and injected into the LHC reaching 7 TeV.
To keep the protons on their circular orbit, 1232 superconducting dipole magnets are installed. They
are cooled down to 1.9 K by liquid helium and provide a magnetic field up to 8.3 T. Additionally,
392 quadrupole magnets keep the beams focused.
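As a rough cross-check of these numbers, the dipole field follows from the relation between beam momentum, magnetic field and bending radius; the effective bending radius of about 2804 m is an assumed value, not quoted in the text:

    p\,[\mathrm{GeV}/c] \approx 0.3\, B\,[\mathrm{T}]\, \rho\,[\mathrm{m}]
    \quad\Rightarrow\quad
    B \approx \frac{7000}{0.3 \times 2804}\,\mathrm{T} \approx 8.3\,\mathrm{T}.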
Lead ions stemming from an electron cyclotron resonance source are bunched and accelerated by
a radio frequency quadrupole. They are selected in the charge state Pb²⁷⁺ and further accelerated
in the linear accelerator LINAC3 to 4.2 MeV/nucleon. After that, they are stripped by a carbon
foil and the charge state Pb⁵⁴⁺ is selected in a filter line. These selected ions are further accelerated
in the low energy ion ring (LEIR) to an energy of 72 MeV/nucleon. From there the ions are
transferred to the PS where they are accelerated to 5.9 GeV/nucleon and sent to the SPS. In
between they pass another foil which fully strips the ions to Pb⁸²⁺. The SPS accelerates the
fully stripped ions to 177 GeV/nucleon, before injecting them into the LHC where they reach a
maximum energy of 2.76 TeV/nucleon.
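Since the magnetic rigidity of the ring is the same for all beams, the maximum energy per nucleon scales with the charge-to-mass ratio of the ion; for fully stripped lead (Z = 82, A = 208) this reproduces the quoted value:

    \frac{E_{\mathrm{Pb}}}{A} = \frac{Z}{A}\, E_p = \frac{82}{208} \times 7\,\mathrm{TeV} \approx 2.76\,\mathrm{TeV/nucleon},

which corresponds to the √sNN = 5.5 TeV per nucleon pair quoted above.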
The particle beams are injected into the LHC clockwise and counterclockwise. Both beams
collide at eight interaction points. Four of these eight interaction points are equipped with the
main experiments, as indicated in Fig. 2.2. Three experiments (ATLAS, CMS, LHCb) mainly
profit from proton-proton collisions. ALICE was specifically designed for the purpose of heavy
ion collisions.
1. ATLAS:
The main goal of the toroidal LHC apparatus (ATLAS) experiment is the detection of the
Higgs boson and the search for physics beyond the standard model, e.g. supersymmetric
particles and extra dimensions.
Figure 2.1: Overview of the accelerator system at CERN (LHC, SPS, PS, PSB, LEIR, the LINACs, AD, ISOLDE and CNGS). This figure has been taken from [27].

Figure 2.2: Schematic view of the Large Hadron Collider and its four experiments ALICE, ATLAS, LHCb and
CMS. This figure has been taken from [28].
2. CMS:
The compact muon solenoid (CMS) is designed to analyze the nature of matter. In principle
the CMS and the ATLAS detectors are built for the same purpose, applying different detector
technologies.
3. LHCb:
The LHC beauty (LHCb) experiment is built to observe CP violation in B-meson systems.
LHCb will help to understand why the universe appears to be composed almost entirely of
matter, with almost no antimatter.
4. ALICE:
A large ion collider experiment (ALICE) is the dedicated heavy ion detector at the LHC. The
ALICE detector is designed to identify and characterize the quark gluon plasma. ALICE is
described in more detail in Sect. 2.2.
2.2 The ALICE Experiment
ALICE determines the identity and precise trajectory of more than ten thousand charged particles
over a large momentum range from 100 MeV/c to 100 GeV/c transverse momentum [29]. An
overview of the single particle identification and momentum range of the various subdetectors
in ALICE is given in Fig. 2.3. These subdetectors are arranged in cylindrical shells around
the interaction point [30], shown in Fig. 2.4. The ALICE central barrel covers the kinematic
region around mid-rapidity and is surrounded by the L3-magnet. The L3-magnet produces a
homogeneous magnetic field of up to 0.5 Tesla parallel to the beam axis. This magnetic field
provides momentum dispersion for charged particles. The subdetectors inside the L3-magnet and
their main tasks are described below.
Figure 2.3: The single particle identification and momentum range of the different subdetectors in ALICE (ITS+TPC dE/dx, TOF, HMPID (RICH), TRD, PHOS and the muon spectrometer).

Figure 2.4: Schematic overview of the ALICE detector. The central barrel consists of the ITS, FMD, TPC, TRD,
TOF, HMPID and PHOS (labels 1 to 8) and is surrounded by the L3 magnet. The muon arm (labels 9 to 13) is composed of
the absorber, tracking chambers, muon filter, trigger chambers and the dipole magnet. Furthermore the overview includes
the PMD (14) and the compensator magnet (15). This figure has been taken from [31].
1. Inner Tracking System:
The collision point is surrounded by the inner tracking system (ITS). The ITS is composed
of six cylindrical layers of silicon detectors located at radii between 4 cm and 44 cm from the
interaction point. The two inner layers are silicon pixel detectors providing highest spatial
resolution of roughly 12 µm, followed by two layers of silicon strip detectors. The two outer
layers are silicon drift detectors. The ITS provides secondary vertexing capabilities, e.g for
the identification of D- and B-mesons.
2. Time Projection Chamber:
The time projection chamber (TPC) is the heart of the ALICE detector and the main
tracking device. The TPC provides particle identification, vertex determination and charged
particle momentum measurements with two-track separation [29]. The TPC is cylindrical
in shape. It incorporates a large field cage filled with gas (Ne/CO2 ). The active volume
ranges from an inner radius of 85 cm to an outer radius of 250 cm and a total length of
about 500 cm. Charged particles traverse the active volume and ionize the gas. The freed
electrons drift along the electric field lines to the cathode pads at the end plates and induce
a signal which is further processed by the front-end-electronics. The TPC provides up to
160 three-dimensional space points along a charged particle trajectory.
3. Transition Radiation Detector:
The transition radiation detector identifies electrons in excess of pT = 1 GeV/c and provides
fast trigger capability of 6 µs. More details of the TRD are described in Chap. 3.
4. Time Of Flight:
The time of flight (TOF) detector is the outermost part of the ALICE tracking chain and
identifies particles in the region where ITS and TPC are no longer sufficient by measuring
the time of flight from the interaction point to a radial distance of approximately 4 m. TOF
is composed of 18 supermodules surrounding the 18 TRD supermodules. The TOF detector
is composed of multigap resistive plate chambers.
5. High Momentum Particle Identification Detector:
The High Momentum Particle Identification Detector (HMPID) is dedicated to inclusive
measurements of identified hadrons at pT > 1 GeV/c [29]. The HMPID is based on the detection method of ring imaging Cherenkov counters (RICH). Cherenkov radiation is emitted
by a particle traveling faster than the speed of light in the medium. The HMPID radiator is filled with liquid perfluorohexane (C₆F₁₄). Multiwire chambers detect the Cherenkov
light produced in the radiator through pads covered by CsI, a photosensitive material. The
multiwire chambers also detect the particle which produced the Cherenkov light.
6. Photon Spectrometer:
The photon spectrometer (PHOS) is a high resolution electromagnetic spectrometer which
provides energy measurement and identification of photons. Neutral mesons, e.g. π⁰ and
η, are identified in the two-photon decay channel through their invariant mass. PHOS
is divided into five independent units positioned at the bottom of ALICE at a distance of
4.6 m from the interaction point. In total, PHOS consists of 17920 lead-tungstate crystals
(PbWO4 ) to identify photons and performs momentum measurements over a wide dynamic
range with high energy and spatial resolution [32].
The muon arm is located outside the L3-magnet and thus not part of the central barrel. It
covers the kinematic region at forward rapidity 2.5 < |η| < 4.0. It identifies J/ψ, ψ′, Υ and
Υ′ through their decay into muons (µ⁺, µ⁻). A large front absorber composed of several materials
absorbs most of the hadrons and the photons. After penetrating the absorber charged particles
are separated in the magnetic field of a dipole. The muon tracking chambers (cathode strip
chambers) are surrounded by the dipole magnet. The muons further pass a filter (iron wall)
which absorbs the low energy muons and background. Behind the filter the muon arm trigger
chambers are placed.
The detectors described above are the main subdetectors of ALICE. More details can be found
in the ALICE technical design report [29] and the ALICE performance report [33].
2.2.1 The ALICE Online System
The ALICE online system ensures safe and correct operation of the ALICE experiment and its
equipment by providing remote control and monitoring. The ALICE online system consists of
four parts:
• The detector control system (DCS).
• The data acquisition system (DAQ).
• The trigger system (TRG).
• The high level trigger system (HLT).
Figure 2.5: Schematic overview of the ALICE control system: the ECS interfaces the LHC with the DAQ, trigger, DCS and HLT control systems; the DCS supervises infrastructure such as electricity, ventilation, cooling, gas, magnets, safety systems, access control and the sub-detector equipment. This figure is adapted from [29].
These four parts interface with each other through a control layer, the experiment control system
(ECS). The ECS synchronizes the various systems (DCS, DAQ, TRG, HLT) and also
interfaces to the LHC accelerator to obtain operational information (e.g. states). The ALICE
control system is a collaboration between the individual subdetector groups and the ALICE
control coordination (ACC). The subdetector groups establish their own detector control systems,
see Chap. 5, based on the concept of finite state machines. A detailed description of finite state
machines follows in Sect. 5.1. Each entity of a subdetector, i.e. electricity, ventilation, cooling,
gas, access control, magnets and other subdetector equipment, as shown in Fig. 2.5, is modeled
as a finite state machine with defined states and actions. The ECS and all other systems (LHC,
DAQ, TRG, HLT) are also based on the concept of finite state machines. Hence the interface
to the various systems is based on the exchange of states and actions between the relevant finite
state machines.
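As a minimal illustration of the concept, a finite state machine maps a (state, action) pair to a new state. The actual ALICE FSMs are written in SMI++ (see Sect. 5.1); the C++ sketch below uses generic placeholder states and actions, not the official ALICE state names:

    #include <string>

    // Generic FSM sketch: a transition function mapping a (state, action)
    // pair to a new state; undefined transitions lead to an error state.
    enum class State { OFF, ON, ERROR };

    State handleAction(State current, const std::string& action) {
        if (current == State::OFF && action == "SWITCH_ON")  return State::ON;
        if (current == State::ON  && action == "SWITCH_OFF") return State::OFF;
        if (action == "RESET")                               return State::OFF;
        return State::ERROR;
    }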
A well designed and thus efficient control system reduces the downtime of the experiment and
therefore contributes to a high running efficiency with positive impact on the quality of the physics
data [29].
3 The Transition Radiation Detector
The transition radiation detector (TRD) identifies electrons in the central barrel with momenta
above 1 GeV/c by using their transition radiation emitted when crossing the boundary between
materials with different dielectric constants. Furthermore the TRD provides fast (6 µs) triggering
capability for high transverse momentum (pT > 3 GeV/c) charged particles.
A comprehensive summary of the design, performance and construction of the ALICE transition
radiation detector can be found in the technical design report of the TRD [34].
In this chapter some basic facts about the TRD are given along with some newly developed
devices and changes since the submission of the technical design report.
3.1 Detector Design
The TRD fills the space between the time projection chamber (TPC) and the time of flight (TOF)
detector in the radial range from 2.9 m to 3.7 m in the ALICE spaceframe with an overall length
of 7 m. It consists of 540 gas detector modules arranged in 18 supermodules mounted in radial
direction, see Fig. 3.1. Each supermodule is divided into six layers in radial direction and five stacks in
beam direction. Hence one supermodule consists of 30 detector modules.
Figure 3.1: Schematic drawing of the ALICE spaceframe for the ITS, TPC, TRD and TOF cut in half. The TRD
consists of 6 layers in radial direction and 5 stacks in beam direction displayed in the colors red, green and yellow.
Transition radiation (TR) is produced by ultrarelativistic particles crossing the border between
materials with different dielectric constants. In the momentum range from 1 GeV/c to 10 GeV/c
only electrons produce transition radiation. Due to the low production probability for a transition
radiation photon of approximately 1% per boundary crossing, several hundred interfaces are used
in the TRD. The number of interfaces is limited due to saturation and interference effects. In the
TRD detector a sandwich radiator with a thickness of 4.8 cm made of Rohacell and polyethylene
fibers is used. A radiator of about 100 boundaries produces approximately one transition radiation
photon in the sensitive range of soft X-rays (1 to 30 keV).
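These two numbers combine to the expected transition radiation yield per electron crossing the radiator:

    \langle N_{\mathrm{TR}} \rangle \approx N_{\mathrm{boundaries}} \times P_{\mathrm{TR}} \approx 100 \times 0.01 = 1.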
As shown in Fig. 3.2 the sandwich radiator is part of each of the 540 modules along with
a multiwire proportional chamber, filled with Xe(85%)CO2 (15%) in the drift region, and its
electronics. The multiwire proportional chamber includes the drift region and the amplification
region. The drift region has a width of 3 cm, the amplification region one of 0.7 cm. A particle
traversing a TRD module creates transition radiation when it passes the radiator, depending on
its Lorentz factor γ. The particle enters the drift chamber together with the produced transition
radiation photon. Both the charged particle and the associated photon
ionize the gas in the chamber and create electron clusters. The transition radiation photon is
absorbed shortly after entering the drift chamber due to the efficient transition radiation photon
absorption provided by the chosen gas mixture. The primary particle constantly produces a track
of electron clusters on its way through the chamber. These electrons drift toward the amplification
region where they are accelerated and further collide with gas atoms, thus producing avalanches
of electrons around the anode wires. In Fig. 3.2 an example for the tracks assigned to pions and
electrons are shown. The large cluster at the beginning of the drift chamber produced from the
transition radiation photon is specific to electrons and hence used to identify them from the large
pion background. Figure 3.3 shows the average pulse shape versus the drift time for electrons
cathode pads
pion
electron
electron
cathode pads
3
amplification
region
anode
wires
4
5
anode
wires
amplification
region
cathode
wires
cathode
wires
Drift
Chamber
Signal
100
75
50
25
drift
region
drift
region
0
2
bin
Time
4
Drift
Chamber
primary
clusters
6
8
10
12
14
entrance
window
1
x
Radiator
z
x
2
3
4
5
6
7
8
er
Pad numb
Radiator
y
pion
TR photon
electron
electron
Figure 3.2: The principle of the ALICE TRD. The left figure shows the projection in the plane perpendicular
to the wires. Electrons produced by ionization energy loss (dE/dx) and by transition radiation absorption drift
along the field lines toward the amplification region where they produce avalanches around the anode wires. These
avalanches induce a signal on the cathode pads. The right figure shows the projection in the bending plane of
the ALICE magnetic field. In this direction the cathode plane is segmented into the pads from 0.635 to 0.785 cm
width. The insert shows the distribution of pulse height over pads and time bins spanning the drift region for a
measured electron track. The local coordinate system shown is the coordinate frame of a single readout chamber.
The z-direction is parallel to the beam axis, y is parallel to the anode wires and follows the rφ direction of the
detector. The x-axis is along the drift region. This figure has been taken from [34].
and pions. Electrons and pions have different pulse heights due to the different ionization energy
loss. A characteristic peak at larger drift times of the electrons is due to the absorbed transition
radiation.
The produced electrons with energy loss due to ionization dE/dx and transition radiation
absorption induce signals on the cathode pads. To detect produced electrons a module has 144
pads in the direction of the amplification wires (rφ-direction) and either 12 or 16 pad rows in the
z-direction. The pads have a typical area of 6-7 cm² and cover a total active area of about
736 m² with approximately 1.2 million readout channels [34]. The readout electronics of the 1.2
million channels is mounted on the back of the module. The signals are read out at 10 MHz
sampling rate such that the signal height on all pads is sampled in time bins of 100 ns. Thus the
readout data from the TRD is characterized by four coordinates: module, pad row, pad column
and time bin. In the drift region a time bin corresponds to a space interval of 1.5 mm in the drift
direction, according to an average drift velocity of 1.5 cm/µs.

Figure 3.3: Average pulse height versus drift time for electrons (upper and middle) and pions (lower) at p = 2 GeV/c. The different
pulse heights indicate the different ionization energy loss (dE/dx) of electrons (green rectangles) and pions (blue
triangles). The characteristic peak at larger drift times of the electrons (red circles) is due to the absorbed transition
radiation. This figure has been taken from [34].
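The quoted space interval follows directly from the sampling time and the drift velocity:

    \Delta x = v_{\mathrm{drift}}\,\Delta t = 1.5\,\mathrm{cm/\mu s} \times 100\,\mathrm{ns} = 1.5\,\mathrm{mm}.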
The electronics process the signals collected by the readout channels before the data is sent
out over an optical link. The electronics of the TRD are based on the multi chip module (MCM)
which consists of two chips, see Fig. 3.4.

Figure 3.4: Schematic overview of the TRD electronics (PASA, and the TRAP chip with ADCs, digital filter, tracklet preprocessor, tracklet processor, event buffer and network interface, shipping tracklets and raw data to the GTU). This figure has been taken from [34].

An 18-channel analog preamplifier and shaper (PASA) provides the read-out detector signal in
shaped and amplified form to the second chip, the tracklet processor (TRAP). The TRAP chip
is a mixed-signal ASIC with digitization, event buffering, and
local tracking functions, consisting of ADCs, digital filters, a tracking processor and CPUs. The
digital filters consist of filter stages performing nonlinearity, baseline and gain corrections, as well
as signal symmetrization and crosstalk suppression [35]. The preprocessor contains hit detection
and selection, calculates the position using the pad response and detects tracklets. The tracklet
processor identifies high pT track candidates for further processing [36]. The different steps in the
readout electronics are necessary to reduce the data size for the trigger decision, i.e. to determine
potential tracklets. The determined tracklets are sent to the global tracking unit (GTU), situated
outside of the detector, over an optical link. The GTU receives the trigger decisions from the individual
readout chambers, combines them and arrives at a global trigger decision.
Sixteen MCMs for digitization are arranged on one readout board (ROB). Each readout chamber
(ROC) has either six or eight ROBs. The MCMs have to be ready for data collection immediately
after the collision. Therefore an "MCM wakeup trigger", the pretrigger, is implemented [37]. The
pretrigger changes the TRAP chip state from waiting to signal processing mode.
3.2 The low voltage system
The low voltage system of the TRD consists of 89 water-cooled Wiener PL512/M power supplies [38], see Fig. 3.5. This large number of power supplies reflects the complexity of the low voltage structure
of the TRD. These 89 power supplies provide the low voltage for detector components such as the readout boards (ROBs), the pretrigger system, the global tracking unit (GTU), the power
control unit (PCU) and the power distribution box (PDB). In total the TRD low voltage system
consists of 224 individual channels, their distribution along with the distribution of the power
supplies is listed in Tab. 3.1.
Figure 3.5: A Wiener power supply mounted in a crate in the lab in Heidelberg. The two blue tubes provide
water cooling. The gray Ethernet cable allows remote surveillance. The orange cable provides 220 V to the
power supply.
System        Power supplies   Channels   Applied voltages
Supermodule   10               18         4 V
Layer pairs   72               162        2.5 V, 4 V
PCU           3                3          4 V
PDB           5                9          4 V
Pretrigger    4                14         4 V, 12 V
GTU           3                18         7 V, 12 V
Total         89               224        -

Table 3.1: Distribution of the power supplies and their channels for the TRD low voltage system. Some power
supplies provide voltage for different subsystems, e.g. PCU and GTU, for optimal use of the channels. The 224
individual channels are provided by 89 Wiener PL512/M power supplies.
Figure 3.6: Back panel of a Wiener power supply. In total there are eight available channels, of which two are used in the
test setup at the Physikalisches Institut in Heidelberg: one channel for the power control unit (left) and one for
the power distribution box (right). The cables are marked blue for ground and red for power.
3.3 The DCS Low Voltage System
The front-end-electronics (FEE) is controlled by a detector control system (DCS) board mounted
on one of the readout boards in the readout chamber. This DCS board checks the electronics
during operation. Additionally the DCS board controls the power cycle of the TRD by controlling
the voltage regulators on the readout boards and is responsible for the configuration of the readout
chambers. The trigger and clock signals are also provided by the DCS board. Without an
operational DCS board a readout chamber is not functional. The DCS board is connected to a
higher control system via Ethernet.
For the operation of the electronics four low voltages and the corresponding grounds are needed:
• 3.3 V digital for the TRAP
• 1.8 V digital, also for the TRAP
• 3.3 V analog for the PASA
• 1.8 V analog for the ADCs
In addition, a high voltage of -2.1 kV is provided to generate the drift field, and a high voltage
of +1.7 kV for a sufficient gas gain.
The low voltage for the electronics of the readout chambers is provided via long copper power
bus bars mounted on the sidewalls of the supermodule. This voltage is generated by the Wiener
PL512/M power supplies. An overview of the DCS board power supply system consisting of the
power control unit, power distribution box and the power distribution control boards (PDC) is
shown in Fig. 3.7. A closer look at the components of the DCS board power supply system is given
in the following sections. The power for the DCS board comes from the power distribution box.
The power distribution box delivers around 4 V to the DCS board, and the voltage regulators on the
DCS board produce 3.3 V and 1.8 V for the components on the DCS board. Each supermodule has
one power distribution box installed; hence 30 DCS boards are controlled by one power distribution
box and a total of 18 power distribution boxes is used for the TRD. To control the power
distribution boxes, and thereby the DCS boards, a connection from the power distribution box to
the power control unit is established. The power control unit is situated outside the supermodule
and controls the DCS board power of nine supermodules, i.e. nine power distribution boxes. Each
power distribution box hosts two power distribution control boards, which implement the logic of the
power distribution box, and 30 output channels, one for each of the 30 DCS boards.
Figure 3.7: Schematic overview of the DCS board power supply system, consisting of power control units
(PCUs), power distribution boxes (PDBs) and power distribution control boards (PDCs). The PCU on the
worker node side drives nine 8-pin data transmission cables; each connects to a PDB inside a supermodule,
which distributes the power from the copper bus bars to the 30 DCS boards (five per layer).
3.3.1 The Power Control Unit
The power control unit (PCU) is the interface between the detector control system and the two
redundant low level power distribution control boards located in the power distribution box. Each
PCU controls nine power distribution boxes, i.e. the DCS board power of nine supermodules.
Thus one PCU controls 270 DCS boards, and to control the 540 DCS boards of the TRD two
PCUs are sufficient. However, the proper functionality of the PCUs is essential for a stable operation
of the TRD, so for failsafe operation two additional PCUs are used in parallel, i.e. four PCUs
control the DCS board power of the full TRD. As shown in Fig. 3.8, the four PCUs are grouped in
two redundant sets.
• PCU00 and PCU02 control supermodule sectors 05-13.
• PCU01 and PCU03 control supermodule sectors 00-04 and 14-17.
Due to the complexity of the DCS board power supply system, in some cases a power cycle is required to
maintain proper functionality. Therefore the power scheme shown in Fig. 3.9 is set up for the
PCU rack. This setup ensures a still functional DCS board power control in case of a broken PCU
or power supply. To maintain the power supply of one PCU of each redundant set, i.e. two PCUs,
all four PCUs are powered by three different low voltage channels provided by three independent
Wiener PL512/M power supplies. As shown in Fig. 3.9 each PCU is powered by two independent
low voltage power channels. The two power inputs are equipped with Schottky diodes. In case of
a faulty power supply, the Schottky diodes protect the remaining power channel. The Zener diode
suppresses voltage spikes from the power supply to protect the PCUs, e.g. during a power cycle.
Furthermore the power supplies are protected by 5 A chip-type fuses. These fuses break in case of
a short on a PCU resulting in a high current. Hence the broken PCU is cut from the power supply
with the remaining PCU still powered.

Figure 3.8: Schematic drawing of the TRD and its supermodule numbering scheme from sector 00 to 17. The
TRD is divided into two parts and for each part one redundant set of two PCUs is installed. Each PCU set controls
nine supermodules. This figure has been taken from [39].

Figure 3.9: Power scheme of the four PCUs. The alidcswie9x are the names of the Wiener power supplies in the
TCP/IP network, with the power channels A, B and C connected to the PCUs.

A power cycle of one PCU requires switching off both of
its input channels (channel A and channel C or channel B and channel C). The other redundant
set is still powered by the third power supply channel (channel A for set PCU00, PCU01 and
channel B for set PCU02, PCU03). In case one redundant set requires a power cycle the other
redundant set keeps the DCS board power control for all 18 supermodules alive. Table 3.2 lists
the channels (first, second and third column) which are switched off to power cycle the PCU listed
in the fourth column.
Channel A   Channel B   Channel C   Power-cycled PCU   DCS board power control
off         on          off         00                 functional
off         on          off         01                 functional
on          off         off         02                 functional
on          off         off         03                 functional

Table 3.2: Channel settings that power cycle the PCU in the fourth column while maintaining the low voltage
for the DCS board power control.

The four PCUs are composed of a hostboard with an attached DCS board and a front panel.

1. The Hostboard
The hostboard acts as a service unit which ensures the power supply and the mechanical
stability. Therefore it is equipped with the necessary infrastructure to operate the attached
DCS board which is mounted as a mezzanine board on two HARWIN M50-3603522 connectors. The hostboard hosts nine RJ45 jacks for the serial connection to the power distribution
control boards (PDCs) and one for the Ethernet connection to the high level control system.
Each RJ45 jack has two integrated light emitting diodes (LEDs): the orange LED indicates an
error, the green LED indicates activity on the channel.
2. The Front Panel
The front panel, as shown in Fig. 3.10, was designed to fit the PCU into the crate in
the ALICE cavern. The front panel is the only visible part after insertion into the crate.
The front panel exposes nine channels, labeled with engraved numbers. These carry the
serial connections to the power distribution boxes, i.e. to their power
distribution control boards. The channels are numbered from CH-0 to CH-8 and each
channel corresponds to one supermodule.
Figure 3.10: The PCU front panel mounted in a 19” rack. Its height is 6 HU. The front panel is made of anodized
aluminum with engraved captions.
The tenth connector provides the Ethernet connection through which the high level control
system sends commands to and receives data from the PCU. The timeout LED lights up red
when a timeout occurs, and the power LED lights up green when power is on.
3. The DCS Board
Figure 3.11 shows a reduced version of a DCS board as used to control the logic of the PCU.
Figure 3.11: A DCS board as mounted on the hostboard, with Ethernet and power connectors. The DCS board has a width of 13.8 cm and a depth of
8.9 cm.
This DCS board has no clock distribution and receiving function. Hence it is of a different
kind than those on the readout chambers. The DCS boards were developed at the Kirchhoff
Institute for Physics in Heidelberg in cooperation with the Fachhochschule Köln [40].
The DCS board hosts all logic for the PCU. The main component of the DCS board is an
ALTERA Excalibur device, based on an ARM922T core
which is connected to a field programmable gate array (FPGA). The combination of these
two components allows for the implementation of an embedded Linux system as operating
system with flexible I/O interfaces. The embedded Linux system, i.e. the firmware, controls
the data transmission units implemented in the hardware of the DCS board.
All user interaction of the PCU is handled via the Ethernet connection to the DCS board.
The hostboard connects the DCS board to the input channel. An overview of the software
structure for processing the user input is shown in Fig. 3.12. The user input on the software
level is processed under Linux using either the command line application sw or the distributed
information management (DIM) server. The command line application sw as well as the DIM
server access the hardware using the Linux device driver and the libsw library [25]. The Linux
device driver is the lowest software layer and enables access to the hardware unit in the FPGA
based on standard read and write commands. The libsw library provides the functions and
routines to communicate with the underlying hardware. This leads to a three-domain technical
system of the PCU: first the software domain based on an embedded Linux system; second the
FPGA as the flexible hardware domain; and third the fixed hardware domain, i.e. the hostboard.
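As an illustration of the server side, the sketch below publishes a value with the standard DIM C++ API. It is a minimal sketch only: the service name TRD_PCU00/STATUSWORD and the server name PCU00 are hypothetical, and the actual PCU server additionally handles commands and the full register set described below:

    #include <unistd.h>   // sleep()
    #include <dis.hxx>    // DIM server C++ API

    int main() {
        // Publish the PCU statusword as a DIM service (names are illustrative).
        int statusword = 0;
        DimService statusService("TRD_PCU00/STATUSWORD", statusword);

        DimServer::start("PCU00");  // register with the DIM name server

        while (true) {
            // here: refresh statusword from the hardware via libsw (not shown)
            statusService.updateService();  // push the current value to clients
            sleep(1);
        }
        return 0;
    }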
The FPGA stores the input data in input registers. The data is further distributed to the output
registers. The parallel data is serialized using a parallel to serial shift register. The data stored in
the output registers is propagated over the RJ45 jack to the PDB using a data transmission based
on a serial protocol including clock, strobe, data lines and feedback lines. The pin assignment of
the 8 pin cable used for the serial connection between the PCU and PDB is shown in Tab. 3.3.

pin   connection line   function
1     clock             transmission clock
2     ground            -
3     strobe            delimits data frames
4     feedback          data returned by the PDC
5     not used          -
6     data              data signal sent in 32 bits by the PCU
7     ground            -
8     not used          -

Table 3.3: The pin assignment of the PCU-PDC cable connection.

Figure 3.12: Software structure on the DCS board of the PCU. The PVSSII part was developed within this thesis
and is explained in Chap. 6.

The data sent over the data line of the serial connection is synchronized by the clock and strobe
signals. The data contains the state of every PDB output channel and is sent in one frame to the
PDC input register, see Sect. 3.3.3. The input registers are operated with the clock of the serial
connection. To control the 30 DCS boards a frame width of at least 30 bits is required; a width
of 32 bits is implemented, with one of the bits used to control the PDC in debug mode. Hence to
control the nine PDCs connected to the PCU, nine data frames are stored in the output registers
of the FPGA and transmitted as shown in Tab. 3.4. The first nine registers are accessible by read
and write commands. A read
command returns the actual value stored in the register. A write command changes the actual
data stored in the registers, e.g. a new command sent by the sw application. Register 9 returns
the firmware version of the DCS board upon a read request.

Register   Meaning
0          data of channel 0
1          data of channel 1
2          data of channel 2
3          data of channel 3
4          data of channel 4
5          data of channel 5
6          data of channel 6
7          data of channel 7
8          data of channel 8
9          firmware version
10         the statusword
11         debug channel
12         valid register
13         clear timeout bit
14         option register
15         time register

Table 3.4: The output registers of the PCU and their occupancies.

The registers 11, 12, 14 and 15 are used
for debugging purposes and contain no data for the end user. Register 10 contains the statusword
of the PCU. The statusword is a 32 bit word and is used to indicate the proper functionality of
the channels of the PCU. The statusword is composed of the following data sets, see Tab. 3.5.

data set     description
bits 0-8     connection flags for channels 0-8
bits 9-17    active flags for channels 0-8
bits 18-26   error flags for channels 0-8
bit 30       PCU timeout flag

Table 3.5: Data sets in the statusword of the PCU and their meaning.

The first nine bits indicate whether the connection between the PCU and the PDB for each of
the nine channels is functional. If the bit is set to one, data frames can be sent to the power
distribution control board. If the bit is zero the connection is faulty. The bits from 9 to 17 are
one if the channel is active, i.e. if data is transmitted. The transmitted data is received from the
PDC and sent back to the PCU via the feedback line in the serial connection, see Tab. 3.3. The
PCU reads this data and compares it to the sent data. In case the sent and the read data are not
equal the error flag bit is set to one. Otherwise the bit is set to zero.
Bit 30 of the status word indicates if the timeout of the PCU is enabled. The timeout is disabled
through any data written to register 13 of Tab. 3.4. The timeout mechanism was introduced
to ensure a functional DCS power supply system, see Sect. 3.3.3.
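The bit layout of Tab. 3.5 can be decoded with simple shifts and masks; the following sketch is illustrative (the function name and output format are not taken from the thesis software):

    #include <cstdint>
    #include <cstdio>

    // Decode the 32-bit PCU statusword of Tab. 3.5: bits 0-8 are connection
    // flags, bits 9-17 active flags, bits 18-26 error flags, bit 30 the
    // timeout flag.
    void decodeStatusword(uint32_t status) {
        for (int ch = 0; ch < 9; ++ch) {
            bool connected = (status >> ch)        & 1u;  // bits 0-8
            bool active    = (status >> (9 + ch))  & 1u;  // bits 9-17
            bool error     = (status >> (18 + ch)) & 1u;  // bits 18-26
            std::printf("channel %d: connected=%d active=%d error=%d\n",
                        ch, connected, active, error);
        }
        std::printf("timeout enabled: %d\n", (int)((status >> 30) & 1u));
    }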
3.3.2 The Power Distribution Box
Each DCS board gets an input voltage of 4 V and a current of approximately 1 A. Hence each DCS
board consumes power up to a maximum of 4 W. Providing an individual power supply channel
would be an oversized and thus expensive solution. The power distribution box avoids the use
of an individual low voltage channel for each of the 540 DCS boards. The power distribution
box is placed inside the supermodule as shown in Fig. 3.7. A PDB with two redundant power
distribution control boards (PDCs) inside is shown in Fig. 3.13.

Figure 3.13: Picture of a power distribution box (PDB). The power distribution control boards are responsible
for the logic. On top of the copper bars are six 18 mF input buffer capacitors. Each of the 30 output channels
(15 are mounted on the bottom side and therefore not visible) is equipped with a black 2 mF capacitor. The DCS
board power cables are fixed with a cramp on the fixation board. The power distribution box has a height of one
height unit (HU), a width of around 43.65 cm and a depth of around 21.9 cm.

The PDB distributes a total current of 30 A to the 30 independent output channels. Individual
manageability of each channel is ensured
by a solid state switch based on a field effect transistor (FET). The power distribution control
board is the control logic of the PDB. It is implemented twice due to the importance of a functional
PDB. The two power distribution control boards are operating in parallel. Hence the FETs are
controlled by two signals, one coming from each power distribution control board. The parallel
operation is explained in more detail in Sect. 3.3.3.
The main current rails of the PDB are two thick copper bars with the ground on the right and
the positive power voltage (Vcc) on the left. The buffer capacitance of 18 mF was inserted to act
as a buffer for sudden load changes. This avoids high current spikes, e.g. when switching the
power of DCS boards. In addition the software invokes a slow start when more than four output
channels are switched on at once. Furthermore each channel has an additional buffer capacitance
of 2 mF.
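The slow start can be pictured as switching the requested channels on in groups of four with a pause in between; the sketch below is an illustration under stated assumptions (setChannel() is a hypothetical stand-in for the real hardware access, and the 100 ms pause is an assumed value):

    #include <unistd.h>   // usleep()
    #include <cstdint>

    // Hypothetical low-level switch for one of the 30 output channels.
    void setChannel(int channel, bool on) { (void)channel; (void)on; }

    // Switch on the channels selected in 'mask', pausing after every four
    // channels to limit the inrush current on the buffer capacitors.
    void slowStart(uint32_t mask) {
        int switched = 0;
        for (int ch = 0; ch < 30; ++ch) {
            if (mask & (1u << ch)) {
                setChannel(ch, true);
                if (++switched % 4 == 0)
                    usleep(100 * 1000);  // assumed pause: 100 ms
            }
        }
    }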
At the back of the PDB a DCS power cable fixation board is mounted for 30 DCS board power
cables. On this board each cable is fixed with a cramp.
At the front of each PDB there are two RJ45 jacks for the serial connection between the
power distribution control boards and the corresponding PCU. A four-part LED
indicates the proper functionality of the serial connection based on the clock, strobe, data and
feedback lines. This is described in detail in Sect. 4.2.
3.3.3 The Power Distribution Control Board
Figure 3.14: A power distribution control board mounted in the power distribution box, showing the PDB-PDC
connection and the ACTEL FPGA. The power distribution control board has a width of 11.4 cm and a depth of 8.5 cm.
The power distribution control board (PDC) is located in the power distribution box and is
responsible for the logic of the power distribution box. The main task is the conversion of the data
sent by the power control unit over the 8 pin cable connection to the control signals for the 30
PDB output channels. The control signals sent from the PCU over the 8 pin cable terminate inside
the power distribution box in an RJ45 jack. The interface between PDB and PDC is established
through a 10 pin connector sitting on the PDC and the 8 necessary pins soldered directly on the
PDB, see Tab. 3.3. The pin assignment from the PDB to the PDC is shown in Tab. 3.6.

pin at PDC   pin soldered on PDB   function
1            1                     clock
2            6                     data
3            5                     strobe
4            2                     ground
5            -                     not used
6            2                     ground
7            4                     feedback
8            -                     spare
9            -                     spare
10           -                     spare

Table 3.6: The PDC-PDB pin assignment.

This
interface routes the signals to the input of the main part of the PDC, the ACTEL FPGA. The
input register is a serial-to-parallel shift register. The serial data is converted to parallel data and
the output signals to control the 30 output channels are generated. The parallel data is put to
the toggle register. The toggle register buffers the data and toggles its output to the 30 channels
if a logical high is present, i.e if the bit of the corresponding channel is one. The two PDCs in
the PDB are operating in parallel coupled through a logical OR. It is ensured that a faulty PDC
does not affect the proper functionality of the redundant unit.
To avoid the loss of control by a logical high or low sent by the PCU, the timeout mechanism
was implemented.
The timeout mechanism
Due to the logical OR coupling of the two redundant PCU channels in the PDB (two PDCs), the channel
sending a logical high determines the state of the PDB channel. A PCU which lost contact to the
detector control system might send high on all channels. That would prevent the redundant PCU
from switching off a channel.
The timeout mechanism consists of a programmable timer controlled by a special timer register.
This user programmable timeout register has a width of 16 bits. The granularity of the timer is
1.6 ms. Thus the maximal timeout is 2^16 × 1.6 ms ≈ 104 s. The timeout register is refreshed by
any valid read or write operation on the hardware. A timeout event is generated if the timer is not
refreshed within the time period set by the user. If a timeout event occurs, all PCU data channels
are set to zero. The PCU which lost contact then no longer sends a logical high to all channels,
and the redundant PCU, respectively its PDC, has full control over the 30 output channels of
the PDB. If the timeout register is enabled, bit 30 in the status word is set to one, as shown in
Tab. 3.5. The timeout register is enabled by sending the command timeout,<seconds> to the
PCU over the sw command line or the DIM server.
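The following C sketch illustrates the behavior of the timeout logic described above. It is a behavioral model only, not the actual FPGA implementation; the variable and function names are chosen purely for illustration.

/* Behavioral sketch of the timeout mechanism (illustration only, not
 * the FPGA implementation). Assumes a 1.6 ms tick and the 16 bit user
 * programmable timeout register described above. */
#include <stdint.h>

static uint16_t timeout_reg; /* expiration time in 1.6 ms ticks; 0 = disabled */
static uint16_t counter;     /* running timeout counter */
static uint32_t channels;    /* PCU data channels, one bit per output channel */

/* called on every valid read or write operation on the hardware */
void refresh(void)
{
    counter = 0;
}

/* called once per 1.6 ms tick */
void tick(void)
{
    if (timeout_reg == 0)            /* timeout mechanism disabled */
        return;
    if (++counter >= timeout_reg) {  /* timeout event */
        channels = 0;                /* all PCU data channels set to zero */
        counter = 0;
    }
}

int main(void)
{
    timeout_reg = 6250;   /* 6250 ticks x 1.6 ms = 10 s expiration time */
    refresh();
    /* tick() would be driven by the hardware clock */
    return 0;
}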
All control commands from the high level control system are sent using the application software
sw or a daemon using the Distributed Information Management (DIM) system. The sw
application is used only in the interactive shell and is thus difficult to include in a higher level control system
as described in Chap. 5. This implies that the DIM server handles all user commands from the
higher level control system. This is described in Chap. 6.
4 Production of the Power Distribution Boxes
The power control units are situated in the cavern outside the L3 magnet. This area is not
accessible for maintenance while the beam is on. Hence the DCS board power supply system has
to be very reliable. To ensure this, some minor hardware improvements were made to the system.
A teststand for the power distribution boxes was set up to ensure the proper functionality of each
PDB before its installation into a supermodule. After the installation the PDB is not accessible
for the duration of the experiment.
4.1 Hardware Improvements
The first version of the PDB was installed in the first supermodule, which was installed in ALICE
at CERN in September 2006 [25]. Afterward, some minor hardware changes were applied to
further improve mechanical stability.
Power Control Unit
As mentioned in Chap. 3, the PCU consists of a hostboard and an attached DCS board. The
hostboard with an attached DCS board is shown in Fig. 4.1.

Figure 4.1: A hostboard of a PCU with an attached DCS board, showing the power input on the hostboard,
the new 6 mm connectors, the cables for the power and timeout LEDs and the Ethernet connection from the
input channel to the DCS board.

The changes are the replacement of the connectors between the DCS board and the hostboard from
3 mm height SMD HARWIN connectors to 6 mm height Narwan SMD-S127.10-6,8-25-70-S1-0 connectors.
The Ethernet connection between
the DCS board and the hostboard is glued to the 6 pin connectors at both sides. The cables for the two
LEDs are glued to the hostboard. Additionally, heat shrink tubes were placed on the two LED
cables. Finally, the DCS board was fixed to the hostboard with plastic screws. These changes ensure
mechanical stability, especially in the strong air flow of the rack cooling. Similar changes were
applied to the power distribution box.
Power Distribution Box
To avoid loose contacts, the connectors between the PDC and the PDB board were changed
from 3 mm to 6 mm through-hole Narwan S127.30-10,3-25-70-S5 connectors, and the PDCs were
screwed to the PDB board using non-magnetic plastic bolts.
A modified version v4 of the PDC was developed at the Kirchhoff Institute for Physics. Version
v4 of the PDC ensures that in case of a missing strobe or data signal in the serial connection
between PCU and PDB/PDC, all PDB output channels are set to zero. This implies that the
DCS board power is switched off. The PDC version v4 was tested, and no errors occurred during
long term tests or in any operation so far. In total, 44 PDCs (38 + 6 spares) were produced by the
MSC company.
The DCS board power cable fixation board was changed from a plastic board using cable ties
to a metal board using clamps to fix the power cables. The power cables are now held more tightly
and the fixation is easier to handle.
With these changes applied, the DCS power supply system was tested at the lab in Heidelberg.
After successfully completing a PDB with the PDC version v4, 19 PDBs were produced and
assembled with the PDCs at the electronics workshop of the Physikalisches Institut in Heidelberg.
Later on, one additional PCU was built, which adds up to a total of 19 PDBs with two
PDCs each and 6 PCU modules. The modifications of the PCU modules have just started; they are
done sequentially in order to keep two functional PCUs at CERN as well as one at the supermodule
construction site in Münster. The 19 PDBs have been tested in the lab in Heidelberg using the
tests described in the following section.
4.2 Test Procedure
To test the 19 boxes with the applied hardware changes, a teststand was set up in the lab at the
Physikalisches Institut. Pictures of the teststand are shown in Fig. 4.2. The boxes are mounted on
the front of a wooden table with three screw clamps. The power cables from the 30 DCS boards
in the green rack are attached to the 30 output channels of the power distribution box. The 30
DCS boards are connected to two Netgear switches which are included in the local network of the
lab. The low voltage power for a single power control unit (PCU) and the power distribution box
(PDB) is provided by a Wiener PL512/M power supply. A complete procedure for testing the
power distribution box consists of five individual tests.
Ping Test
The 30 output channels of the PDB were switched on, thus the attached 30 DCS boards were
supplied with power and started booting automatically. During the boot sequence each DCS
board acquires a unique IP address from the dynamic host configuration protocol (DHCP) server.
These IP addresses are defined according to their hardware number. The DCS boards were then
accessible through the local network in the lab, e.g. by ping. A monitor program [41] periodically
pings all 30 DCS boards and displays their actual status (up or down); an example is shown in Fig. 4.3.
Figure 4.2: Pictures from the teststand in the lab in Heidelberg. The left picture shows an overall view of the
teststand. In the crate on the left the backpanel of the power supply with the attached power distribution box and
the power control unit is shown. The wooden table in the middle hosts the stand for the 30 DCS boards. A top
view of the 30 DCS boards is shown in the right picture. In this picture the blue cables are the Ethernet cables
attached to the Ethernet switches. The black cables are the power cables connected to the 30 output channels of
the PDB.
The ping test was successful if all 30 DCS boards were up, thus indicating that they were
successfully powered.
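A minimal version of such a ping based check could look as follows. The hostnames dcsboard00 to dcsboard29 are hypothetical placeholders for the actual DCS board names assigned via the DHCP server; the real monitor program [41] additionally displays the results graphically.

/* Sketch of a ping based power status check; the hostnames are
 * hypothetical stand-ins for the real DCS board names. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char cmd[96];
    for (int i = 0; i < 30; i++) {
        /* one echo request, 1 s timeout; exit code 0 means the board is up */
        snprintf(cmd, sizeof(cmd),
                 "ping -c 1 -W 1 dcsboard%02d > /dev/null 2>&1", i);
        printf("dcsboard%02d: %s\n", i, system(cmd) == 0 ? "up" : "down");
    }
    return 0;
}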
Data transmission test
In the data transmission test three different patterns of 30 bit length were sent separately to
both power distribution control boards using the sw application pdbtest [25]. Each pattern sent
amounts to 30 data frames. In total, these three patterns were sent 5000 times, implying a data
volume of 450000 frames. Each data frame sent was compared to the received data frame. The
data transmission test was successful if every single frame pair matched, thus indicating proper
functionality of all lines of the serial connection. An example of a typical output of the data
transmission test is given below:
1. Pattern
Data transmission test
Sent frames: 150000,
Received good frames: 150000
Received bad frames: 0
2. Pattern
Data transmission test
Sent frames: 150000,
Received good frames: 150000
Received bad frames: 0
3. Pattern
Data transmission test
Sent frames: 150000,
Received good frames: 150000
Received bad frames: 0
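The following C sketch mirrors the structure of this test under the numbers stated above (3 patterns × 30 frames × 5000 repetitions = 450000 frames). The pattern values and the loopback stubs, which stand in for the real serial connection driven by pdbtest, are illustrative assumptions only.

/* Sketch of the frame comparison in the data transmission test. */
#include <stdio.h>
#include <stdint.h>

static uint32_t last_frame;                      /* loopback stub state */
static void send_frame(uint32_t f) { last_frame = f; }
static uint32_t read_frame(void)   { return last_frame; }

int main(void)
{
    /* three hypothetical 30 bit test patterns */
    const uint32_t patterns[3] = { 0x3FFFFFFF, 0x2AAAAAAA, 0x15555555 };
    long good = 0, bad = 0;

    for (int rep = 0; rep < 5000; rep++)         /* 5000 repetitions */
        for (int p = 0; p < 3; p++)              /* 3 patterns */
            for (int f = 0; f < 30; f++) {       /* 30 frames per pattern */
                send_frame(patterns[p]);
                if (read_frame() == patterns[p]) good++;
                else bad++;
            }

    printf("sent: %ld, good: %ld, bad: %ld\n", good + bad, good, bad);
    return 0;
}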
Figure 4.3: Display of the DCS board monitor. This DCS board monitor pings the DCS boards to check
their power status. The display is ordered in layers from top to bottom and stacks from left to right. The
supermodule, stack and layer numbers are visualized by the numbers after trd in every third line. The first
two numbers indicate the supermodule sector, the third is the layer and the fourth is the stack. The line
below identifies the DCS board by its hardware number. This hardware number is stored in a database.
Longterm Test
The longterm test is a twelve hour continuous operation of the PCU/PDC system. The expiration
time for the timeout mechanism was set to 10 s. To prevent a timeout, an update command was
issued periodically every 5 s to refresh the timeout register. At the beginning all DCS boards were
powered. If all 30 DCS boards were still powered after twelve hours, the test was completed
successfully. This indicated that the timeout register was regularly refreshed, proving longterm
stability of the system.
Timeout Mechanism Test
Directly after successfully completing the longterm test, no command was issued anymore to refresh
the timeout register. Thus a timeout event should occur, powering off all 30 DCS boards. The
timeout mechanism test was successful if all DCS boards went down, typically within the defined
expiration time.
Logic Test
In the logic test the single lines of the serial connection were interrupted using an additional
board between PCU and PDC with jumpers for the clock, strobe, data and feedback lines, see
Fig. 4.4. The two ground lines were not included in this test since they do not influence any PDC
logic or data transmission. In case of an interrupted clock, data or strobe line the DCS boards
are no longer supplied with power, since the serial connection no longer works properly
and the FPGA sets all 30 output channels to zero; hence the 30 DCS boards are switched off.
In case of an interrupted feedback line the DCS boards stayed powered and their status was
identified by the DCS board monitor program, shown in Fig. 4.3.
Figure 4.4: Jumper board used for the logic test of the serial connection between PCU and PDC.
Additionally, interrupted lines are indicated by the 4-LED display on the front side of the PDB,
see Tab. 4.1. Each line was broken individually. The logic test was successfully completed if for
each broken line the expected behavior was observed. An overview of the test results is given in App. B.
Part of the front LED | state    | meaning
0                     | off      | no clock and strobe is ignored
0                     | blinking | clock and strobe ok
0                     | on       | clock ok and strobe bad
1                     | off      | all output channels off
1                     | blinking | some output channels on
1                     | on       | all output channels on
2                     | off      | data is zero
2                     | blinking | data not constant
2                     | on       | data is one

Table 4.1: The defined states of the 4-LED display of the PDB identifying broken lines in the serial connection
between PCU and PDC.
5 The Detector Control System
The Detector Control System (DCS) of ALICE provides an environment for configuring, monitoring and
controlling the experiment's equipment. This includes hardware and software devices
with custom designed software (firmware) running on them. Communication with the hardware is
established through communication protocols over the network (TCP/IP). The DCS architecture is
divided into three layers, as shown in Fig. 5.1.
Figure 5.1: Schematic architecture of the detector control system (DCS). This figure has been taken from [42].
1. Supervisory Level
The supervisory level consists of several PCs providing the graphical user interfaces to the
operator. The technologies used to build the graphical user interfaces for semi-automatic control
are the supervisory control and data acquisition (SCADA) tool PVSSII and the
state management interface (SMI++) based finite state machine (FSM) tool.
2. Process Control Layer
The process control layer is the interface between the supervisory layer and the lower field
layer. The interface is established by several PCs and PLC devices. In the process control
layer, monitoring information and the status of the experiment's equipment are collected.
The technologies which make this information available to the supervisory layer are the
Distributed Information Management (DIM) system and OLE for Process Control (OPC), among
other communication protocols.
3. Field Layer
The field layer includes the experiment’s equipment e.g. power supplies, sensors, DCS
boards, etc. and their specific software, e.g. the firmware of the DCS board.
The joint controls project (JCOP) framework developed at CERN provides components like
access control, hierarchical control (FSM), interfaces to hardware devices as well as rules and
guidelines, e.g. color codes and naming conventions, to ensure the homogeneity of the detector
control system.
5.1 Finite State Machine
Each component of the detector control system is modeled as a finite state machine (FSM)
with a set of defined states and actions for state transitions. A hierarchical, tree-like structure,
following the arrangement of the components in the subdetectors, is implemented by creating
state management interface (SMI++) classes and objects. The objects are either physical or
abstract. Physical objects interface with physical devices, e.g. power supplies or DCS boards.
Abstract objects are logically related and grouped inside SMI++ domains.
The finite state machine of each component is modeled using device units (DU) and control units
(CU).
• Control unit:
Control units (CU) monitor the states of their children and report an overall state to their
parents.
• Device unit:
Device units (DU) represent hardware components passing their actual state to a control
unit. Therefore the device unit maps between the hardware and the finite state machine
state.
As shown in Fig. 5.2 the control units and device units accept commands from graphical user
Figure 5.2: Simple scheme of the command and state propagation in a finite state machine hierarchy. The control
unit (CU) is always the top entity but not directly related to the hardware. The device unit (DU) is always the
bottom node and interfaces directly to the hardware. This figure has been adapted from [29].
interface panels as well as from their parent control unit. At the lowest level, i.e. at the bottom
of a CU tree, the command arrives at the device unit and is passed to the hardware.
5.2 PVSS
PVSSII is the supervisory control and data acquisition (SCADA) system adopted in the ALICE
DCS. It is a commercial product developed by the Austrian company ETM.
In short, PVSSII consists of a run-time database and an editor for building graphical user
interfaces. The structure of the run-time database includes data points (DPs) of a defined data
point type (DPT). The data point type is defined according to the structure of the device and
can be as complex as necessary to follow the data structure of the device. From a defined data
point type, data points are created. These data points adopt the structure of the data point type,
so many data points can be created with the definition of one data point type. A data point
describes the structure but not the values read from the device. These values are stored in so
called data point elements (DPEs), which are defined as boolean, float, integer or unsigned integer.
The graphical user interfaces are built using predefined widgets like buttons, textfields, etc.
These widgets are integrated into the graphical user interfaces, so called panels, by drag and drop.
Each widget has event dependent scripts to control its dynamics. Scripts perform actions on
the widgets when they are clicked or when the panel is initialized. The scripts are written in the
PVSSII internal control script language (CTRL). To write these scripts, PVSSII provides predefined
functions like dpSet() or dpConnect(). These functions are integrated in panel global scripts
as well as in the widget scripts. In the global scripts, functions or variables are defined which are
accessible to each widget, respectively its event dependent script. Since there are components to
be controlled and monitored which are of the same type, PVSSII provides another tool: the
possibility to create reference panels. These reference panels are the object-oriented, graphical
equivalent to classes in C++. Like classes in C++, reference panels define structures, thus
the layout of the graphical user interface. The instances are initialized at run time. The single
instances are individualized through the inheritance of additional information. In PVSSII this is
realized by passing so called dollar parameters. The dollar parameters are set in the scripts of the
widgets in the reference panel by using $<parameter name>. Panels having the same layout
but with widgets connected to different data point elements are created from the same reference
panel by passing different dollar parameters to each panel. One of the major advantages of
reference panels is that modifications in the reference panel are automatically propagated to all
panels created from it.
PVSSII applications are managed in units of projects. A project stores all information required
to build an application. The projects are started as distributed projects because then several
projects can be included as subprojects in a main project. They are connected to other systems
by using the distribution manager. PVSSII has a highly distributed architecture composed of
several processes, so called managers. The different managers communicate via a PVSSII specific
protocol over TCP/IP [29]. An overview of the manager structure of PVSSII is shown in Fig. 5.3.
Figure 5.3: Schematic overview of the manager structure in a PVSSII system. This figure has been taken from [29].
A PVSSII system is an application including one event manager, one database manager and
several drivers and user interfaces.
The device and navigation editor (DEN) displays the hardware and logical view as well as the
finite state machine view of the system hierarchy, see Sect. 5.1. The hierarchy with the three
different views in the device and navigation editor for the TRD low voltage setup, as used during
the ALICE cosmic run at CERN in December 2007, is shown in Fig. 5.4.
Figure 5.4: Example of the TRD logical view (left), the hardware view (middle) and the finite state machine
hierarchy (right), as defined for the low voltage system used during the ALICE cosmic run in December 2007 at
CERN. The FSM hierarchy shows the PCU_CONSOLE to control the DCS board power. More details about the
TRD DCS can be found in [43].
5.3 The Distributed Information Management System
The Distributed Information Management (DIM) system was developed at CERN to connect
local devices to the supervisory layer. The DIM system is based on the client/server paradigm.
The logical architecture of the client/server paradigm is shown in Fig. 5.5. The device software
(firmware) publishes services identified by a tagged name. The published services which contain
data sets relevant for the user are integrated into PVSSII (the client) by connecting these published
services to data points in PVSSII. This is established by a script which runs continuously in the
background of PVSSII.
Figure 5.5: DIM follows the client/server paradigm. Servers provide data sets which are specified by a name
tag. The name server handles the names of all available services. The server publishes the data sets by registering
them with the name server. The clients subscribe to published services by requesting a service from the name
server. The client then contacts the server directly, subscribes to the service and sends commands to the server.
This figure has been adapted from [44].
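As an illustration of this paradigm, a client could subscribe to a published service with the DIM C API roughly as in the following sketch. The service name is hypothetical, and the exact callback conventions may differ between DIM versions.

/* Sketch of a DIM client subscription (dic.h); the service name is a
 * hypothetical example. */
#include <stdio.h>
#include <unistd.h>
#include <dic.h>

static int no_link = -1;   /* delivered when the server is unreachable */

/* called by DIM whenever the subscribed data set is updated */
static void data_handler(long *tag, int *value, int *size)
{
    (void)tag; (void)size;
    printf("received value: 0x%08x\n", *value);
}

int main(void)
{
    /* subscribe in MONITORED mode: every server update is pushed here */
    dic_info_service("trd-pcu_00/READVALUE_CH0", MONITORED, 0, 0, 0,
                     (void (*)())data_handler, 0, &no_link, sizeof(no_link));
    for (;;)
        pause();   /* updates arrive asynchronously in data_handler */
}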
5.4 The Detector Control System of the TRD
This section briefly summarizes the detector control system of the TRD. The TRD DCS [43, 45]
is developed using the tools and utilities described in the previous sections. The hardware
structure of its subsystems, including the communication protocols between the hardware and the
supervisory level, is shown in Fig. 5.6 and Fig. 5.7.
The controlling, monitoring and implementation into the FSM hierarchy is done for every
single subsystem, i.e. for high voltage [46], the high voltage distribution system (HVDS) [41], the
global tracking unit (GTU) [47, 48, 49], the pretrigger system [37, 50, 51], the front-end electronics
(FEE) [52], low voltage (LV) [43], cooling and the gas system. The different subsystems partially
use the same kind of hardware devices, e.g. DCS boards, as shown in Fig. 5.7 and Fig. 5.6.
Therefore different DCS board software (firmware) is required.
The corresponding DCS board firmware is built on a Linux system using a cross compiler for
the ARM architecture [53]. The build process is governed by a Makefile containing all instructions for
compilation and linking. For compiling and linking of the source code, the autotools autoconf [54]
and automake [55] are used.
Until recently, each subsystem using a DCS board, i.e. HVDS, PCU, GTU and FEE, was identified
by a DCS_FLAVOR tag, e.g. trd_hvds, trd_pcu or trd_fee. With the introduction of the
Itsy Package Management System ipkg [56], a single firmware version trd_ipkg is used for all
subsystems. Subsystem specific software is installed afterward by upgrading the latest firmware
on the DCS boards using the lightweight package management system ipkg.
The user software is provided as .ipk files and is available from the yum repository [57]. The projects
currently available as ipkg packages are libTRD, libdim, feeserver-dlopen and controlengine
for FEE and pcu_dim for PCU. The necessary packages are automatically downloaded from this
repository and installed or upgraded if necessary. Especially after flashing new firmware on the
DCS board or after changes in the ipkg repository, an update of the installed packages is required.
For more detail on the Itsy package management system and its application within the TRD,
refer to [52].
Figure 5.6: Structure of components included in the TRD Detector Control System, except for cooling, gas and
low voltage.
Figure 5.7: The second part of the TRD Detector Control System structure, including low voltage, cooling and
gas.
6 The Control System for the DCS-board
Power-Supply System
A graphical user interface based on the PVSSII system and a finite state machine for control
and monitoring of the DCS board power supply system were developed within this thesis. The
graphical user interface is attached to the finite state machine which allows for integration in the
global TRD detector control system [43].
The communication to the hardware is realized through a DIM client as part of the PVSSII
project connected to the DIM server running on the DCS board of the power control unit. An
overview is given in Fig. 6.1. The following sections describe the DIM server-client interface, the
structure of the run-time database, the graphical user interface and the finite state machine in
more detail.
Figure 6.1: Schematic overview of the command and data flow and of the tools for processing them.
6.1 DIM-server to DIM-client Interface
The PCU uses the Distributed Information Management (DIM) protocol to communicate with
the supervisory layer, see Sect. 5.3. The server names for the four PCUs are defined as listed in
Tab. 6.1. Since several DIM servers run on the same name server (DIM_DNS_NODE), the name tag
includes the subdetector (trd) and the component (pcu). These server names are defined by an
environment variable DIM_SERVICENAME set in a shell script as part of the firmware on the
PCU DCS board. The shell script sets the environment variable DIM_SERVICENAME by translating
the DNS hostname of the DCS board to the DIM_SERVICENAME using the lookup table given in
Tab. 6.1. The DIM server running on the PCU DCS board publishes this variable as the name
tag of the service, which is then available to the DIM client, in this case PVSSII.
DIM_SERVICENAME | DNS hostname  | DNS alias
trd-pcu_00      | alidcsdcb0800 | alitrddcbpc00
trd-pcu_01      | alidcsdcb0801 | alitrddcbpc01
trd-pcu_02      | alidcsdcb0802 | alitrddcbpc02
trd-pcu_03      | alidcsdcb0803 | alitrddcbpc03

Table 6.1: The lookup table for the PCU name services. The DIM service name is the name tag.
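On the server side, publishing one such data set with the DIM C API could look roughly like the following sketch. The service name is an assumption based on the name tags of Tab. 6.1; the real PCU firmware publishes the full set of data sets listed in Tab. 6.2.

/* Sketch of publishing one data set with the DIM C API (dis.h). */
#include <unistd.h>
#include <dis.h>

static int readvalue_ch0;   /* readvalue of PCU channel 0 */

int main(void)
{
    /* publish the data set under its tagged name ("I" = one integer) */
    unsigned sid = dis_add_service("trd-pcu_00/READVALUE_CH0", "I",
                                   &readvalue_ch0, sizeof(readvalue_ch0),
                                   NULL, 0);
    dis_start_serving("trd-pcu_00");   /* register with the DIM name server */

    for (;;) {
        sleep(5);
        /* in the firmware this value would be read from the hardware */
        readvalue_ch0 = 0x3FFFFFFF;
        dis_update_service(sid);       /* push the update to all clients */
    }
}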
Each PCU DCS board publishes sixteen data sets. These are the data sets stored in the
registers listed in Tab. 3.4. To display the current status of the PCU and its connected PDCs,
the data sets listed in Tab. 6.2 are provided. The data sets 0 to 8 contain the readvalue from each PDC.

data set | contained data
0        | readvalue channel 0
1        | readvalue channel 1
2        | readvalue channel 2
3        | readvalue channel 3
4        | readvalue channel 4
5        | readvalue channel 5
6        | readvalue channel 6
7        | readvalue channel 7
8        | readvalue channel 8
9        | firmware version
10       | statusword

Table 6.2: The channels provided by the PCU DIM server.

The readvalue is a 32 bit value. It contains the status of the 30 output channels, i.e. the power
status (on or off) of the DCS board on each chamber. The last two bits (30 and 31) are used for
debugging. Data set 9 contains the firmware version and data set 10 is the statusword of the PCU.
This statusword contains the status of the nine PCU channels and the timeout flag. In the other
direction, PVSSII submits commands to the DIM server through the command channel. These
commands are parsed in libsw. Here, the function to be called in libsw as well as the addressing
of the corresponding supermodule sector, layer and stack are extracted. The addressing of the
supermodule sector to the corresponding PCU channel is given in Tab. 6.3 and Tab. 6.4. The
addressing for the layer and stack is given in Tab. 6.5. This data is sent to the PDCs
using the serial connection as described in Sect. 3.3. The DIM server is integrated as a part of
the firmware installed on the PCU. The firmware is regularly updated to follow the latest changes
regarding the ipkg packages used for the PCU. The actual firmware version of the PCU is
accessible by a read request on register 9, as shown in Tab. 6.2.
PCU channel | supermodule sector
0           | 05
1           | 06
2           | 07
3           | 08
4           | 09
5           | 10
6           | 11
7           | 12
8           | 13

Table 6.3: Relation between PCU channels and supermodule sectors for trd_pcu00 and its backup trd_pcu02.

PCU channel | supermodule sector
0           | 04
1           | 03
2           | 02
3           | 01
4           | 00
5           | 17
6           | 16
7           | 15
8           | 14

Table 6.4: Relation between PCU channels and supermodule sectors for trd_pcu01 and its backup trd_pcu03.

bit number in readvalue | stack | layer
0                       | 2     | 4
1                       | 2     | 1
2                       | 3     | 4
3                       | 3     | 1
4                       | 4     | 4
5                       | 4     | 1
6                       | 0     | 5
7                       | 0     | 2
8                       | 1     | 5
9                       | 1     | 2
10                      | 2     | 5
11                      | 2     | 2
12                      | 3     | 5
13                      | 3     | 2
14                      | 4     | 5
15                      | 4     | 2
16                      | 1     | 1
17                      | 1     | 4
18                      | 0     | 1
19                      | 0     | 4
20                      | 4     | 0
21                      | 4     | 3
22                      | 3     | 0
23                      | 3     | 3
24                      | 2     | 0
25                      | 2     | 3
26                      | 1     | 0
27                      | 1     | 3
28                      | 0     | 0
29                      | 0     | 3
30                      | –     | –
31                      | –     | –

Table 6.5: The bit number in the readvalue and the corresponding stack and layer for the DCS boards. Bits 30
and 31 are used to identify errors at the corresponding channel and are used only for debugging purposes.
6.2 Controlling and Monitoring
The published data sets (readvalues and statusword) from the PCU, see Tab. 6.2, are connected
to the PVSSII run-time database. Therefore a data point corresponding to each of the published
data sets is created. The connection between the defined data point and the published data
set is handled by the DIM client, and the proper data points are assigned through the configuration
DIM_ConfigPdb. The configuration DIM_ConfigPdb is defined in the background script
dim_pdb_setup.c. This background script is added to the PVSSII console as a control manager
and starts automatically at start up of the PCU PVSSII project. Furthermore, the DIM client
of PVSSII is started by adding a PVSSII DIM manager with the proper name of the DIM name
server, specified by the DIM_DNS_NODE environment variable, and the corresponding configuration
DIM_ConfigPdb. These two added managers ensure the correct import of the data sets, in this
case the readvalues of the PDCs and the statusword. The imported readvalues and the statusword
are further processed in PVSSII by checking each single bit of the two 32 bit values. These bits
give the status of each single DCS board and the status of the connection, activity, error and
timeout flags of the PCU.
The readvalue is 32 bits long, hence one readvalue contains the status of the DCS boards
of one supermodule. The relation between the nine channels of the PCU and the supermodule
numbering scheme is pictured in Fig. 3.8 and given in Tab. 6.3 and Tab. 6.4. Each single bit
is assigned to one DCS board on a specific stack and layer. The assignment is given in Tab. 6.5.
According to Tab. 6.5 the commands from the supervisory level are translated into the
corresponding data bits to control the output channels with the attached DCS boards. A bit set
to one switches the DCS board power on; a bit set to zero switches it off. The commands are sent
through the command channel of the DIM system. For this purpose a data point for sending
commands is included in the data point structure of PVSSII.
6.2.1 The PCU data point type structure in PVSSII
First, one data point type is created with the structure shown in Fig. 6.2. The data point type
is named trdpcu. The structure of the data point type follows the logical view of the DCS
board power supply system. The data points for the four PCUs are created in PVSSII using the
same data point type. The names of the data points are trd_pcu00, trd_pcu01, trd_pcu02 and
trd_pcu03. The data point type is subdivided into a supermodule part and a command part.
Supermodule data point type
The supermodule part contains the statusword, which is assigned to data set 10 provided by
the DIM server. It is defined as an integer variable with a length of 32 bits. The definition
of the statusword is the same as described in Sect. 3.3.1. The statusword contains the relevant
information on the connectivity and activity of the 9 PCU channels. This information
is displayed in the main control panel of the PCU PVSSII project, see Fig. 6.3.
To obtain the information about which supermodule and which channel is connected and whether
the channel is active, the 32 bit statusword is investigated bit by bit. The first nine bits indicate
the PCU channels 0 to 8 and their status regarding the connection to the PDCs: if the "connection"
bit is one, the connection is established; otherwise the connection is faulty.
Figure 6.2: The structure of the PCU in the run time database of PVSSII.
The bits 9 to 17 indicate the activity of the nine PCU channels. If the "activity" bit of a PCU
channel is one, the channel is active. An "activity" bit set to zero indicates a non-active channel.
The bits 18 to 26 indicate whether the data sent is equal to the received data. If the sent and the
received data match, the bit is set to zero, otherwise it is set to one. Bit 30 is used to identify
whether the timeout counter is enabled (one) or disabled (zero). The other bits (27, 28, 29 and 31)
contain no information and are set to zero.
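This bit-by-bit investigation can be summarized in a short C sketch; the function name and output format are illustrative, while the bit positions follow the description above.

/* Sketch of the bit by bit statusword interpretation described above. */
#include <stdio.h>
#include <stdint.h>

static void decode_statusword(uint32_t sw)
{
    for (int ch = 0; ch < 9; ch++) {
        int connected = (sw >> ch)        & 1;  /* bits 0..8:   connection    */
        int active    = (sw >> (9 + ch))  & 1;  /* bits 9..17:  activity      */
        int mismatch  = (sw >> (18 + ch)) & 1;  /* bits 18..26: sent/received */
        printf("channel %d: connection %s, %s, sent/received %s\n", ch,
               connected ? "ok" : "faulty",
               active    ? "active" : "inactive",
               mismatch  ? "mismatch" : "ok");
    }
    /* bit 30: timeout counter enabled flag */
    printf("timeout counter %s\n", (sw >> 30) & 1 ? "enabled" : "disabled");
}

int main(void)
{
    decode_statusword(0x400001FFu);  /* example: 9 channels connected, timeout on */
    return 0;
}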
The positions in the statusword correspond to the PCU channels 0 to 8 for the "connection" bits,
the "activity" bits and the "sent/received" bits. The nine PCU channels correspond to
different supermodule sectors, as described in Sect. 6.2. The PCU channel number is converted
to the supermodule number by using lookup tables stored in the PVSSII library. The
library lookuptable_SM_Channel.ctl converts the bits to the supermodule sectors following the
mapping given in Tab. 6.3 and Tab. 6.4. The second library lookuptable_SM_PCU.ctl retrieves the
corresponding PCU number (0 to 3) from the supermodule sector obtained with the first
lookup table. The relation between the supermodule sector and the PCU number is shown in
Fig. 3.8.
To control the DCS boards of each single supermodule, the data point type tree is divided into
supermodules. These data point types carry the subsystem name (PCU) and the supermodule
(SM) sector (00 to 17). The SMXXPCU data point types are further divided into 5 stack
data point types, and the stacks are partitioned into 6 layers. The 6 layers are the last node in the data
point type structure and include the status of each DCS board in boolean format. The status of
the DCS boards is received through the readvalue for each supermodule. The 32 bit readvalue
is translated by adopting Tab. 6.5 as a lookup table in the library of PVSSII. The lookup
table named lookuptable_Layer_Stack_Single_Panel.ctl is stored in the library of the PVSSII
PCU project. It translates the bit number of the readvalue to the corresponding
stack and layer and sets the layer data point element in the structure to TRUE or FALSE. The
layer node is set to TRUE if the bit is one, otherwise to FALSE. The readvalue as well
as the status of each DCS board (TRUE or FALSE) are displayed in a user interface for each
supermodule, see Fig. 6.4.
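Expressed in C, the translation of Tab. 6.5 amounts to a simple lookup array. This is only a sketch of the mapping; the actual implementation is the CTRL lookup table named above. Bits 30 and 31 are debugging flags and therefore omitted.

/* The mapping of Tab. 6.5 as a C lookup array (sketch only). */
#include <stdio.h>
#include <stdint.h>

static const struct { int stack, layer; } bit_map[30] = {
    {2,4},{2,1},{3,4},{3,1},{4,4},{4,1},{0,5},{0,2},{1,5},{1,2},
    {2,5},{2,2},{3,5},{3,2},{4,5},{4,2},{1,1},{1,4},{0,1},{0,4},
    {4,0},{4,3},{3,0},{3,3},{2,0},{2,3},{1,0},{1,3},{0,0},{0,3}
};

static void decode_readvalue(uint32_t rv)
{
    for (int bit = 0; bit < 30; bit++)
        printf("stack %d, layer %d: %s\n",
               bit_map[bit].stack, bit_map[bit].layer,
               (rv >> bit) & 1 ? "on" : "off");  /* TRUE/FALSE in PVSSII */
}

int main(void)
{
    decode_readvalue(0x3FFFFFFFu);   /* example: all 30 DCS boards powered */
    return 0;
}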
Command data point type
The command data point type handles the commands sent from PVSSII to the DIM server over
the command channel. The commands are all sent as strings. The commands sent to the PCU
have the following structure; there are four types of commands (a minimal sketch of sending them
follows the list).
1. The on command
The on command is used to power up the DCS board. Therefore it contains the information
for the position of the target DCS board specified by supermodule, layer and stack.
Syntax: on,channel,layer,stack
E.g.: on,4,3,3
• channel:
PCU channel [0...8]
• layer:
The layer number [0...5] in the supermodule. Additionally there is the option to switch
on all layers at once by using all instead of the layer number.
• stack:
The stack number [0...4] in the supermodule. Additionally there is the option to switch
on all stacks at once by using all instead of the stack number.
2. The off command
The off command switches the DCS boards power off and follows the same syntax as the
on command.
Syntax: off,channel,layer,stack
E.g.: off,4,3,3
3. The update command
The update command refreshes the data provided by the DIM server, hence the values of
the corresponding data points in PVSSII.
Syntax: update.
To ensure that the values are updated regularly, thus keeping the information on the actual
status current, a background script sends the update command to all 4 PCUs every 5 seconds. This
background script is automatically started as part of the dim_pdb_setup.c script.
4. The timeout command
The timeout command is used to set the timeout expiration time of the PCU.
Syntax: timeout,expiration time
E.g.: timeout,10
The expiration time is set to values between 0 and 104 seconds. If the timeout counter reaches
the expiration time, all DCS board power is turned off. The timeout mechanism is disabled by
sending the command timeout,0.
Any timeout command also switches the DCS boards off in case they were on.
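The announced sketch of sending these string commands from a client is given below. It assumes the DIM C client API, and the name of the command service is hypothetical.

/* Sketch of sending the string commands over the DIM command channel. */
#include <string.h>
#include <dic.h>

static void pcu_command(char *cmd)
{
    /* hypothetical name tag of the PCU command service */
    dic_cmnd_service("trd-pcu_00/Command", cmd, strlen(cmd) + 1);
}

int main(void)
{
    pcu_command("timeout,10");   /* enable the timeout, 10 s expiration */
    pcu_command("on,4,3,3");     /* DCS board on: channel 4, layer 3, stack 3 */
    pcu_command("update");       /* refresh the published data sets */
    return 0;
}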
To display the status retrieved through the data points in PVSSII a graphical user interface was
created which is described in the next section.
6.2.2 Graphical User Interface
A graphical user interface (GUI) was developed to control the DCS board power. The GUI follows
the guidelines [58] provided by the JCOP framework. The GUI is intended to be used by non-experts
during experimental shifts. Therefore the GUI design should be as simple to handle as possible.
To control and monitor the power status of the 540 DCS boards through the four PCUs, two
panels were created: the main control panel and the DCS board power control panel.
The Main Control Panel
Figure 6.3: The main control panel for control and monitoring of the DCS board power supply system as operated
in the lab with one PCU. This panel controls and monitors all 4 PCUs and their status. In detail, it shows whether
the connection to the PDCs is working (rectangles), whether the channel is active (circles) and whether the sent
data corresponds to the received data (triangles). To obtain this information the statusword is investigated bit by
bit. Here PCU01 is powered and connected to two PDCs. The connected channels are assigned to the PDBs in
supermodule sector 01 and supermodule sector 00. The mapping for the channel-to-supermodule relation is given
in App. A.
The first panel is the main control panel, shown in Fig. 6.3. The main control panel visualizes
the status of the PCU channels connected to a PDC.
The panel is divided into four parts. In the top part the buttons ALL 18 Supermodules ON and
ALL 18 Supermodules OFF are placed. These two buttons enable or disable the DCS board power
of all 18 supermodules with one click. These buttons have not been tested yet because the
panels were commissioned with only two installed supermodules. The timeout control is also
implemented in the top part. By clicking the Set Timeout button the timeout command is sent
to all four PCUs at the same time. The expiration time for the timeout command is set by the
user in the textfield above. This enables the timeout mechanism with a user defined expiration
time. An enabled or disabled timeout mechanism is visualized by the rectangle left of the textfield.
The rectangle turns green if the timeout mechanism is enabled, i.e. if bit 30 in the statusword
is set. Otherwise the rectangle turns red. The expiration time is displayed in this rectangle. The
Disable Timeout button disables the timeout mechanism.
The middle part of the main control panel displays the status of the nine PCU channels. The
status is retrieved from the statusword. The middle part is divided into two sections. The left and the
right section display the status of one redundant PCU set each. The left side of the panel shows the
status of the redundant PCU set trd_pcu01 and trd_pcu03 (backup). The right side displays the
same for the redundant set trd_pcu00 and trd_pcu02 (backup). The actual statusword stored in
the data point elements for each single PCU is displayed as hexadecimal values in the textfields.
The last command sent is displayed in the corresponding PCU Command textfield. By clicking
the button with the supermodule number (SMXX) another panel for detailed controlling and
monitoring pops up, i.e. the DCS board power control panel, shown in Fig. 6.4. The buttons with
the supermodule number are arranged according to the PCU channel which controls the DCS
board power of the corresponding supermodule.
The status of the PCU channels is indicated by the statusword. To display the status,
triangles, circles and rectangles are implemented. They turn red or green according to the bits in
the statusword, as described in Sect. 6.2.1. A triangle turns green if the "sent/received" bit for
the corresponding channel/supermodule is set to zero; otherwise the triangle is red. A circle
becomes green if the channel is active, i.e. if the "activity" bit is one. The rectangles indicate the
status of the connection between the PCU and the PDB/PDC for each channel. If the connection
is established, the corresponding rectangle turns green; otherwise the bit is zero and the rectangle
turns red.
The meaning of the triangles, circles and rectangles is given in the bottom part along with the
Close button and the actual time.
DCS board control panel for one supermodule
The DCS board power control panel is used for controlling and monitoring the power status of the
30 DCS boards of one supermodule, i.e. for displaying the readvalue of one PCU channel. To monitor
and control each of the 30 DCS boards independently, the panel is divided into 5 stack columns and 6
layer rows. The power state of the DCS board sitting on the readout chamber of the corresponding
stack and layer is visualized by an LED-like indicator. These LEDs have two defined states:
red indicates that the DCS board power is off, green indicates a powered DCS board.
To retrieve the power status, the LEDs are connected to the boolean data point element of the
layer. If the data point element is set to TRUE, the LED shows green, hence the
DCS board is powered. Otherwise the data point element is set to FALSE. The DCS board power
is controlled by one power distribution box with two power distribution control boards. These
PDCs are connected to two redundant power control units (PCU). To display the readvalue of
both PDCs, and thus of both redundant PCU channels, two LED lines are implemented. The LED
line on the right in each stack column displays the layer data point elements of the backup PCU,
i.e. either PCU02 or PCU03. The left line displays the data point elements of PCU00 or
PCU01. The readvalues of the PCU and its backup are displayed as hexadecimal values in two
textfields. The state of a DCS board is changed by executing an action, i.e. by sending a command.
Figure 6.4: DCS board power control panel for one supermodule as operated in the lab with one PCU. This panel is
a child panel of the main control panel, shown in Fig. 6.3. The power status of each DCS board in the supermodule
is indicated by a red or a green status LED. By clicking on the displayed buttons actions are enforced, i.e. commands
are sent.
The actions are executed by clicking the implemented buttons. The ON and OFF buttons between
the two LED columns change the power state of a single DCS board. The commands are sent
to the PCU and its backup with a single click. This ensures that the redundant PCUs always
propagate the same data to the two PDCs located in one PDB.
The STACK ON and STACK OFF buttons switch the DCS board power of one stack, i.e. the
power of 6 DCS boards. The Layer ON and Layer OFF buttons switch the DCS board power
of one layer, i.e. the power of 5 DCS boards. To switch the power of all 30 DCS boards in one
supermodule with one click, the buttons SWITCH SM DCS BOARDS ON and SWITCH SM DCS
BOARDS OFF are implemented.
In the upper left corner the supermodule sector is displayed. The commands are constructed
such that the DCS board power of the indicated supermodule is controlled, hence the commands
include the PCU channel number according to the setup shown in Tab. 6.3 and Tab. 6.4.
The integration of the PVSSII PCU project in the global TRD detector control is realized by
creating a finite state machine. The finite state machine of the PCU is part of the low voltage
system in the global TRD DCS. A list of all subsystems of the global TRD detector control system
is given in App. E.
6.3 Finite State Machine for the Power Control Unit
The finite state machine (FSM) for the PCU is established to integrate the power control unit
(PCU) into the hierarchy of the TRD detector control system. The PCU is part of the control
system for the low voltage, as shown in Fig. 5.4.
In general a finite state machine consists of defined states and actions triggering the transitions
between states. The defined states and actions for the PCU are described in Sect. 6.3.1 and Sect. 6.3.2.
The finite state machine for the PCU is established using the device and navigation editor (DEN)
of PVSSII. In this device and navigation editor the control units and the device units for the
PCU are created by defining control unit types (SMI++ classes) and device unit types (SMI++
objects). The FSM is fully integrated in the JCOP framework and the data points are not directly
visible; thus the PCU is declared as a hardware device, called TrdPcu, in the hardware view of the
device and navigation editor as part of the TRD low voltage system, shown in the middle picture
of Fig. 5.4.
The control unit types and the device unit types are created in the FSM part of the device and
navigation editor in the editor mode. For the PCU system two device unit types, the SMI++
objects, are defined, called trdpcu0002 and trdpcu0103. The SMI++ class created for the control
unit is called trdpcutype. For debugging purposes another class called trd_pcuSingle was created,
since not all four PCUs are installed. The tree for the PCU FSM is created by assigning the control
unit to the trdpcutype, which creates the SMI++ domain (PCU_CONSOLE), and the installed
PCUs to their proper device types, as shown in Tab. 6.6.
PCU module | device type
trd_pcu00  | trdpcu0002
trd_pcu01  | trdpcu0103
trd_pcu02  | trdpcu0002
trd_pcu03  | trdpcu0103

Table 6.6: The PCUs and their device types in the FSM hierarchy.
As described in Chap. 5, the four device units report an overall state to the control unit. The
overall state depends on the single states of the device units. E.g. if one device unit is in the
state ERROR, the state reported to the control unit is ERROR. The possible states of the device
units are described in Sect. 6.3.1, and the combinations of the device unit states leading to the
reported overall state are listed in App. C.
6.3.1 States in the FSM
A schematic view of the TRD PCU finite state machine is shown in Fig. 6.5. In the graphical
display individual states have defined color codes according to the state(s) of the DCS board
power or the connection between PCU and PDB/PDC. These changes are reflected in the statusword
and the readvalue of the respective channel. The scripts in the FSM check these data sets
automatically within a defined time interval. The colors of the states follow the guidelines declared
by the JCOP framework [59].
1. NO CONTROL
The device type script includes a 9 bit pattern (Supermodule_config). This pattern indicates
the low voltage power state of the supermodules. The pattern is still hard coded; the
final solution foresees that it is loaded from the database. If a supermodule is supposed
to be on but no connection bit is set for it in the statusword, the FSM node shows the
state NO CONTROL.
Figure 6.5: Finite state machine diagram for the power control unit with defined states and transitions.
The arrows indicate actions which perform the transitions between states.
FSM State  | Color  | Description
NO CONTROL | orange | Error; control is lost
OFF        | gray   | Devices are switched off
STANDBY    | blue   | Crates and boards are on; output channels are still off
MIXED      | yellow | Warning; units of the same kind are not in the same state
ON         | green  | Crates and boards as well as the output channels are on
NO TIMEOUT | orange | Error; timeout not set
ERROR      | red    | Fatal error

Table 6.7: Defined states of the PCU finite state machine with their corresponding color code. The colors follow
the guidelines of the JCOP framework [59].
2. OFF
The node goes to the state OFF if the devices, PCU and PDB, are powered and the connection
between the PCU and the PDB, respectively the PDC, is established. The node goes back to
NO CONTROL when the connection between PCU and PDB is interrupted.
3. STANDBY
The PCU node in the FSM goes to the state STANDBY when the timeout mechanism of the
PCU is set. The default setting of the timeout expiration time is ten seconds. As long as the
timeout counter in the PCU is not enabled, there is no way to switch the power of any
DCS board via the top node of the PCU's FSM. The DCS board power can still be switched
using the graphical user interfaces, the main control panel, see Fig. 6.3, and the DCS board
power control panel, see Fig. 6.4.
4. MIXED
The MIXED state was implemented to distinguish the case where only some (at least one)
DCS boards are powered from the case where all are powered (state ON). It is considered
to be an intermediate state.
The MIXED state is also used as an indicator for broken DCS boards, because in normal
operation all DCS boards are powered and the node is supposed to show the state ON.
Hence the color yellow was chosen according to the guidelines.
5. ON
The node goes to the state ON if all DCS boards are powered and the timeout mechanism
of the PCU is enabled.
6. NO TIMEOUT
If at least one DCS board is powered but the timeout mechanism is not enabled, the node
shows the state NO TIMEOUT.

7. ERROR
The ERROR state requires at least one powered DCS board. If a connection line is
interrupted or the PCU or PDB loses its low voltage power, the node switches to the
state ERROR.
The states NO CONTROL and ERROR can appear upon failure of hardware or software.
If no DCS board is powered, no detector functionality is lost. Thus the state
NO CONTROL is a warning state.
On the other hand, if DCS boards are powered and the hardware or software is dysfunctional in
some way, the control over the corresponding readout chambers would be lost. This implies
a not fully functional detector. Hence an ERROR occurs.
6.3.2 Actions in the FSM
The possible transitions between the states, triggered either by actions or by failure of the
hardware, are described in this section. The actions are available in the top node (control unit)
as well as in each of the four sub nodes (device units). An action triggered from the top node is
passed to all four subnodes.
1. NO CONTROL → OFF
The node switches from NO CONTROL to OFF if the PCU and PDB have low voltage
power. This requires no dedicated action in the PCU node because it is part of the low
voltage controlling and monitoring [43].
2. OFF → STANDBY: SETTIMEOUT
The transition from OFF to STANDBY follows after the action SETTIMEOUT is executed
with the default timeout setting of ten seconds. The STANDBY state is the first accessible
state where the DCS boards can be controlled by the user. The user can choose between
SWITCH ON and SWITCH ON STACK0-4.
3. STANDBY → MIXED: SWITCH ON STACK0-4
The MIXED state is reached if some DCS boards are switched on by executing a command
such as SWITCH ON STACK0.
4. STANDBY → ON
The node can also switch directly from STANDBY to ON if all DCS boards are powered at
once by the command SWITCH ON. In the end only this command is supposed to be used.
5. MIXED → ON
The same action as from STANDBY to ON.
6. MIXED → STANDBY: SWITCH OFF
The command SWITCH OFF switches off the power of all DCS boards. Hence the node goes
to STANDBY.
7. ON → STANDBY: SWITCH OFF
See MIXED → STANDBY.
8. ON → MIXED: SWITCH OFF STACK0-4
The command SWITCH OFF STACK0 switches off the power of one stack, i.e. of 6 DCS
boards, in all installed supermodules. This implies that not all DCS boards are powered, so
the node goes to MIXED.
9. NO TIMEOUT → STANDBY: SETTIMEOUT
If the timeout mechanism of the PCU is not enabled and DCS boards are switched on, the
node goes to the state NO TIMEOUT. The only possible command is SETTIMEOUT, which
enables the timeout mechanism of the PCU. Then all DCS boards are automatically switched
off. These settings correspond to the state STANDBY.
10. ERROR → NO CONTROL: RECOVER
The ERROR state is displayed according to the conditions described above. If an ERROR
occurs, a power cycle of the PCU in the ERROR state and its corresponding PDBs is
required. The framework tool provides the data points to control single channels of the
power supplies. One of these data points is connected to the power of the power supply
channel. The executed RECOVER command sets this boolean data point to zero, which
implies a power cycle of the components attached to this channel. Setting the PDB and PCU
low voltage channels to zero sets the node in the FSM to NO CONTROL because the
power is lost. After the power up sequence the node switches to the state OFF. This takes
approximately 3 seconds.
6.4 Software Commissioning
The control and monitoring system described above allows for operation of all 18 power distribution
boxes and 4 power control units of the full TRD. Presently two of the eighteen supermodules are
installed in the ALICE TRD spaceframe, i.e. the supermodules in sector 00 and sector 08, and
only one redundant set consisting of the power control units PCU00 and PCU02 is installed in
the ALICE cavern.
State      | action             | State after action
OFF        | SETTIMEOUT         | STANDBY
STANDBY    | SWITCH ON          | ON
STANDBY    | SWITCH ON STACK0   | MIXED
STANDBY    | SWITCH ON STACK1   | MIXED
STANDBY    | SWITCH ON STACK2   | MIXED
STANDBY    | SWITCH ON STACK3   | MIXED
STANDBY    | SWITCH ON STACK4   | MIXED
MIXED      | SWITCH ON          | ON
MIXED      | SWITCH OFF         | STANDBY
ON         | SWITCH OFF STACK0  | MIXED
ON         | SWITCH OFF STACK1  | MIXED
ON         | SWITCH OFF STACK2  | MIXED
ON         | SWITCH OFF STACK3  | MIXED
ON         | SWITCH OFF STACK4  | MIXED
ON         | SWITCH OFF         | STANDBY
NO TIMEOUT | SETTIMEOUT         | STANDBY
ERROR      | RECOVER            | NO CONTROL

Table 6.8: Actions in the PCU object modeled as FSM.
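For illustration, Tab. 6.8 can be expressed as a transition function in C. This is only a sketch of the table semantics; the real FSM is declared with SMI++ as described above, and the STACK1 to STACK4 variants are abbreviated in the comments.

/* Tab. 6.8 expressed as a transition function (sketch only). */
#include <stdio.h>
#include <string.h>

typedef enum { NO_CONTROL, OFF, STANDBY, MIXED, ON, NO_TIMEOUT, ERROR } state_t;

static const struct { state_t from; const char *action; state_t to; } table[] = {
    { OFF,        "SETTIMEOUT",        STANDBY    },
    { STANDBY,    "SWITCH_ON",         ON         },
    { STANDBY,    "SWITCH_ON_STACK0",  MIXED      },  /* likewise STACK1..4 */
    { MIXED,      "SWITCH_ON",         ON         },
    { MIXED,      "SWITCH_OFF",        STANDBY    },
    { ON,         "SWITCH_OFF_STACK0", MIXED      },  /* likewise STACK1..4 */
    { ON,         "SWITCH_OFF",        STANDBY    },
    { NO_TIMEOUT, "SETTIMEOUT",        STANDBY    },
    { ERROR,      "RECOVER",           NO_CONTROL },
};

static state_t apply(state_t s, const char *action)
{
    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].from == s && strcmp(table[i].action, action) == 0)
            return table[i].to;
    return s;   /* action not defined in this state: stay */
}

int main(void)
{
    state_t s = OFF;
    s = apply(s, "SETTIMEOUT");  /* OFF -> STANDBY */
    s = apply(s, "SWITCH_ON");   /* STANDBY -> ON  */
    printf("final state: %d\n", s);
    return 0;
}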
However, sector 00 belongs to PCU01 and PCU03, as shown in Fig. 3.8. To operate the DCS
board power of both supermodules redundantly, the following changes have been applied. The
relation of the PCU channel to the supermodule sector within PVSSII, as listed in Tab. 6.3 and
Tab. 6.4, was changed as given in Tab. 6.9. These changes were applied in the
lookup tables and stored in the PVSSII library under the names lookuptable_SM_Channel_CERN_A.ctl
and lookuptable_SM_PCU_CERN_A.ctl.

PCU channel    supermodule sector
0              00
1              01
3              07
4              08
5              09
6              10
7              16
8              17

Table 6.9: The relation between supermodule number and PCU channel used during the cosmic run and until
the end of 2008.
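To illustrate how such a lookup table is used, the following PVSSII CTRL sketch implements the
channel-to-sector relation of Tab. 6.9; the function name and the hard-coded mapping are
illustrative only and do not reproduce the code stored in the library:

    // Hypothetical lookup following Tab. 6.9; PCU channel 2 is unused
    // in this setup and therefore returns -1.
    int pcuChannelToSector(int channel)
    {
      dyn_int sector = makeDynInt(0, 1, -1, 7, 8, 9, 10, 16, 17); // CTRL arrays are 1-based
      if (channel < 0 || channel > 8)
        return -1;
      return sector[channel + 1];
    }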
Accordingly, the dim_pdb_setup.c background script was changed to account for the different
relation between data points in PVSSII and the data sets published by the DIM server, and was
named dim_pdb_setup_CERN_A.c. With this setup, commissioning took place during a two-week
ALICE run with cosmic events. Both TRD supermodules were operated successfully. However,
later in the run one power control unit was removed due to mechanical instability. After the
removal, the panels of the graphical user interface no longer monitored the actual status of the
DCS board power. Further investigation indicated that the displayed read values and status
words were no longer updated. This behavior is not yet understood and will be investigated
further during the upcoming ALICE run with cosmic events starting in February 2008.
7 Summary
Within this thesis, 18 (+1 spare) power distribution boxes (PDB) were produced based on
an existing prototype developed in an earlier Master thesis. Some improvements were made to
enhance mechanical stability: the connectors to the power distribution control boards were
extended, and plastic screws with washers were added to firmly mount the boards on the power
distribution box.
All power distribution boxes were successfully tested at the Physikalisches Institut in Heidelberg.
The tests showed a high production reliability; only minor defects occurred, e.g. broken LEDs
indicating the individual states of the 30 output channels. All power distribution boxes are
ready for installation into TRD supermodules at the supermodule construction site at University
of Münster.
A control system was developed providing a graphical user interface based on the program
package PVSSII. Further, a finite state machine was defined and implemented for automated
operation using the programming language SMI++. This system is part of the TRD detector
control system and was installed on the TRD low voltage worker node in the counting room of
ALICE. Commissioning took place during a two-week ALICE run with cosmic events in December
2007. The two TRD supermodules already installed at that time were operated successfully.
When one of the two redundant power control units was removed, the actual status of the DCS
board power was no longer monitored correctly. This remains an open issue and will be
investigated further in the next ALICE run with cosmic events in February 2008.
Access control, i.e. assigning certain privileges to users that give them access to all or to a
restricted part of the graphical user interface, is still to be implemented [43].
The project developed in this thesis allows for operation of all 18 power distribution boxes
and four PCUs, thus providing DCS board power and control for full TRD. With the continuing
installation of more TRD supermodules and the scheduled startup of the LHC in summer 2008,
successful operation of the TRD DCS board power supply and its control system is expected.
A Mappings
#
## PCU output to SM channel mapping
#
# by David Emschermann
# version 0.1, 22.01.2007

# PCU crate - DCS hostnames and aliases
#----------------------------------------
DCS_00   alidcsdcb0800   alitrddcbpc00
DCS_01   alidcsdcb0801   alitrddcbpc01
DCS_02   alidcsdcb0802   alitrddcbpc02   (backup of 00)
DCS_03   alidcsdcb0803   alitrddcbpc03   (backup of 01)

# PCU channel mapping
# front view of the PCU crate
#------------------------------------------------------------------------------
#            primary system            |             backup system
#     DCS_00       |      DCS_01       |      DCS_02       |      DCS_03
#------------------|-------------------|-------------------|-------------------
# ch - SM - cable  | ch - SM - cable   | ch - SM - cable   | ch - SM - cable
ch_0 - SM05 - 316  | ch_0 - SM04 - 314 | ch_0 - SM05 - 317 | ch_0 - SM04 - 315
ch_1 - SM06 - 318  | ch_1 - SM03 - 312 | ch_1 - SM06 - 319 | ch_1 - SM03 - 313
ch_2 - SM07 - 320  | ch_2 - SM02 - 310 | ch_2 - SM07 - 321 | ch_2 - SM02 - 311
ch_3 - SM08 - 322  | ch_3 - SM01 - 308 | ch_3 - SM08 - 323 | ch_3 - SM01 - 309
ch_4 - SM09 - 324  | ch_4 - SM00 - 306 | ch_4 - SM09 - 325 | ch_4 - SM00 - 307
ch_5 - SM10 - 326  | ch_5 - SM17 - 340 | ch_5 - SM10 - 327 | ch_5 - SM17 - 341
ch_6 - SM11 - 328  | ch_6 - SM16 - 338 | ch_6 - SM11 - 329 | ch_6 - SM16 - 339
ch_7 - SM12 - 330  | ch_7 - SM15 - 336 | ch_7 - SM12 - 331 | ch_7 - SM15 - 337
ch_8 - SM13 - 332  | ch_8 - SM14 - 334 | ch_8 - SM13 - 333 | ch_8 - SM14 - 335
#------------------------------------------------------------------------------

# PCU power inputs:
#------------------------------------------------------
input A : DCS_00, DCS_01                 - alidcswie090
input B : DCS_02, DCS_03                 - alidcswie091
input C : DCS_00, DCS_01, DCS_02, DCS_03 - alidcswie092

Figure A.1: The PCU channels, as engraved in the front panel, are each assigned to the supermodule in which
the connected PDB is situated.
B Summary of test results
PDB Serial Number    Test Result
00                   ok
01                   ok
02                   ok
03                   ok
04                   ok
06                   ok
07                   ok
08                   ok
09                   ok
10                   ok
11                   ok
12                   ok
13                   ok
14                   ok
15                   ok
16                   ok
17                   ok
18                   ok
19                   ok

Remarks from the tests: one output channel repaired (hot wire); output channel L4 S4 was broken
and repaired.

Table B.1: Summary of the PDB test results. Details of the test procedure are described in Sect. 4. The first
column lists the PDB serial number, as labeled on the front side of the PDB.
C The overall state
This appendix lists the code that generates the overall state from the four device units, i.e.
from the two device types trdpcu0002 and trdpcu0103. The generated overall state is reported
to the control unit. The code is created in the editor mode of the device and navigation editor
under the FSM tab. The states listed here represent the overall state of the control unit, which
is generated according to the "when" conditions given below. Additionally, the possible actions
of the control unit are listed. The corresponding action is passed to all four device units
simultaneously.
state: OFF
when ( ( $ANY$trdpcu0002 in state ERROR ) or
( $ANY$trdpcu0103 in state ERROR ) ) move to ERROR
when ( ( $ANY$trdpcu0002 in state STANDBY ) and
( $ANY$trdpcu0103 in state STANDBY ) ) move to STANDBY
when ( ( $ANY$trdpcu0002 in state ON ) and
( $ANY$trdpcu0103 in state ON ) and
( $ALL$trdpcu0103 not in state STANDBY ) and
( $ALL$trdpcu0002 not in state STANDBY ) ) move to ON
action: SETTIMEOUT
state: ON
when ( ( $ANY$trdpcu0002 in state ERROR ) or
( $ANY$trdpcu0103 in state ERROR ) ) move to ERROR
when ( ( $ANY$trdpcu0002 in state STANDBY ) and
( $ANY$trdpcu0103 in state STANDBY ) ) move to STANDBY
when ( ( $ALL$trdpcu0002 in state OFF ) and
( $ALL$trdpcu0103 in state OFF ) ) move to OFF
action: SWITCH OFF
state: STANDBY
when ( ( $ANY$trdpcu0002 in state ERROR ) or
( $ANY$trdpcu0103 in state ERROR ) ) move to ERROR
when ( ( $ANY$trdpcu0103 in state ON ) and
( $ANY$trdpcu0002 in state ON ) ) move to ON
when ( ( $ALL$trdpcu0002 in state OFF ) and
( $ALL$trdpcu0103 in state OFF ) ) move to OFF
action: SWITCH ON
action: SWITCH ON STACK0
action: SWITCH ON STACK1
action: SWITCH ON STACK2
action: SWITCH ON STACK3
action: SWITCH ON STACK4
state: ERROR
when ( ( $ALL$trdpcu0002 not in state ERROR ) and
( $ALL$trdpcu0103 not in state ERROR ) ) move to STANDBY
action: RECOVER
state: NO CONTROL
when ( $ALL$FwCHILDREN in state NO CONTROL ) move to NO CONTROL
state: MIXED
action: SWITCH ON
action: SWITCH OFF

state: NO TIMEOUT
action: SETTIMEOUT
D Installation of the PCU project
This appendix summarizes the main steps to install the PCU project as a standalone project
in PVSSII.
1. Create a new PVSSII project.
2. Download the trd_pcu package from the repository (folder PVSS_packages):
http://alice.physi.uni-heidelberg.de/cgi-bin/viewvc/bin/cgi/viewcvs.cgi/.
3. Start the Device and Navigation Editor (DEN).
4. Install the trd pcu package using the framework installation tool.
5. Import the scripts (trdpcu0002 and trdpcu0103) for the FSM device types through the
"Configuration Object Type" panel from the library.
6. Create the FSM tree: one control unit (trdpcutype or trd_pcuSingle) and the device units
(trd_pcu00 - trd_pcu03).
7. Add the MainControlPanel.pnl panel in settings.
8. Set the proper DIM_DNS_NODE, e.g. to alitrddimdns at CERN, in the DEN and start the
DIM manager (PVSSDIM in the console) after setting the DIM_DNS_NODE in the properties
of the manager (a minimal check is sketched after this list).
9. Start All.
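As a minimal sketch, the DIM name-server setting can be verified from a CTRL script before the
DIM manager is started; this check is illustrative only and assumes that DIM_DNS_NODE is
provided as an environment variable:

    // Illustrative check of the DIM name-server configuration.
    main()
    {
      string dns = getenv("DIM_DNS_NODE");
      if (dns == "")
        DebugN("DIM_DNS_NODE is not set; configure it, e.g. alitrddimdns at CERN");
      else
        DebugN("DIM name server:", dns);
    }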
E DCS project distribution at CERN
The TRD detector control system is distributed over several worker nodes. The various PVSSII
systems interface with each other using the distributed manager. The PCU project is installed
on the worker node alitrdwn001, as part of the trd_lv project, in Counting Room CR3 of ALICE.
An overview of the TRD worker nodes and their installed PVSSII projects is given in Tab. E.1.
Computer       DCS task         PVSS project            TRD task
alitrdon001    Operator node    trd                     Top-node FSM
alitrdwn001    Worker node      trd_lv                  LV, PCU control
alitrdwn002    Worker node      trd_hv                  HV control
alitrdwn003    Worker node      trd_fed                 FED control
alitrdwn004    Worker node      trd_gtu, trd_pretrig    PreTrigger, GTU control
alitrdwn007    Worker node      trd_gas, trd_cool       gas, cooling
alitrdwn008    Worker node      trd-hvd                 HV distribution box

Table E.1: The distribution of the TRD detector control system among the operator and worker nodes in CR3
of ALICE. This distribution has been taken from [43].
Glossary
AC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alternating Current
ALICE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A Large Ion Collider Experiment
ARM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Advanced RISC Machine
ATLAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A Toroidal LHC Apparatus
BNL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Brookhaven National Laboratory
CMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Compact Muon Solenoid
CU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Control Unit
DCS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Detector Control System
DEN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Device and Navigation Editor
DIM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Distributed Information Management System
DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Domain Name System
DP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Data Point
DPT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Data Point Type
DPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Data Point Element
DU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Device Unit
FEE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Front End Electronics
FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Field Programmable Gate Array
FSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Finite State Machine
GTU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Global Tracking Unit
HMPID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . High Momentum Particle Identification Detector
ITS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Inner Tracking System
JCOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Joint Controls Project
LAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Local Area Network
LED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Light Emitting Diode
LEIR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Low Energy Ion Ring
LINAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Linear Accelerator
LHC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Large Hadron Collider
MCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Multi Chip Module
OLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Object Linking and Embedding
OPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . OLE for Process Control
PASA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Preamplifier and Shaper
PCU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Power Control Unit
PDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Power Distribution Box
PDC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Power Distribution Control Board
PHOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Photon Spectrometer
PLC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Programmable Logic Controller
PLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Programmable Logical Device
PS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Proton Synchrotron
PVSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Prozessvisualisierungs und Steuerungssystem
QCD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Quantum Chromodynamics
QGP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Quark-Gluon-Plasma
RHIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Relativistic Heavy Ion Collider
RJ45 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Registered Jack 45
SCADA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Supervisory Control and Data Acquisition
SMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Surface Mounted Device
SMI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . State Management Interface
SPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Super Proton Synchrotron
TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Transmission Control Protocol and the Internet Protocol
TPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Time Projection Chamber
TOF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Time of Flight
TRAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tracklet Processor
TRD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Transition Radiation Detector
Bibliography
[1] D.J. Gross and F. Wilczek, Phys. Rev. Lett. 30 (1973) 1343.
[2] H.J. Politzer, Phys. Rev. Lett. 30 (1973) 1346.
[3] N. Cabibbo and G. Parisi, Phys. Lett. B59 (1975) 67.
[4] J.C. Collins and M.J. Perry, Phys. Rev. Lett. 34 (1975) 1353.
[5] F. Karsch, Nucl. Phys. A698 (2002) 199c;
F. Karsch, Lect. Notes Phys. 583 (2002) 209.
[6] R. Hagedorn, Nuovo Cim. Suppl. 3 (1965) 147.
[7] X. Zhu et al., Phys. Lett. B647 (2007) 366.
[8] A. Andronic et al., Nucl. Phys. A789 (2007) 334.
[9] L. Yan, P. Zhuang and N. Xu, Phys. Rev. Lett. 97 (2006) 232301.
[10] M. Djordjevic and M. Gyulassy, Acta Phys. Hung. A24 (2005).
[11] B. Zhang, L.W. Chen and C-M. Ko, Phys. Rev. C72 (2005).
[12] P. Braun-Munzinger and J. Stachel, Nucl. Phys. A690 (2001) 119c.
[13] A. Andronic et al., Phys. Lett. B571 (2003) 36.
[14] T. Matsui and H. Satz, Phys. Lett. B178 (1986) 416.
[15] M.C. Abreu et al., Phys. Lett. B499 (2001) 85.
[16] A. Capella, A.B. Kaidalov and D. Sousa, Phys. Rev. C65 (2002) 054908.
[17] P. Braun-Munzinger and J. Stachel, Phys. Lett. B490 (2000) 196.
[18] R.L. Thews, M. Schroedter and J. Rafelski, Phys. Rev. C63 (2001) 054905.
[19] L. Grandchamp et al., Phys. Rev. C73 (2006) 064906.
[20] B. Zhang, Phys. Lett. B647 (2007) 249.
[21] V. Greco, C.M. Ko, R. Rapp, Phys. Lett. B595 (2004) 202.
[22] Z.W. Lin and D. Molnar, Phys. Rev. C68 (2003) 044901.
[23] A. Adare et al., Phys. Rev. Lett. 98 (2007) 232301.
[24] A. Andronic et al., Phys. Lett. B652 (2007) 259.
[25] J. Steckert, Master Thesis, Fachhochschule Karlsruhe (2007);
http://www.kip.uni-heidelberg.de/ti/publications/diploma/2007JensSteckert.pdf.
[26] L. Evans, New Journal of Physics 9 (2007) 335.
[27] D. Manglunki, PS Div. CERN (2001);
http://ps-div.web.cern.ch/ps-div/PS/complex/accelerators.pdf.
[28] http://www.cern.ch.
[29] ALICE Collaboration, ALICE TDR 010, CERN-LHCC-2003-062 (2004).
[30] P. Braun-Munzinger and J. Stachel, Nature 448 (2007) 302.
[31] http://doc.cern.ch//archive/electronic/cern/others/multimedia/poster/poster-2004-004.pdf.
[32] B. Dönigus, Diploma Thesis, TU Darmstadt (2007);
http://www-linux.gsi.de/~doenigus/diploma.pdf.
[33] ALICE Collaboration, J. Phys. G: Nucl. Part. Phys. 32 (2006), 1295-2040.
[34] ALICE Collaboration, ALICE TDR 9, CERN/LHCC 2001-021 (2001);
https://edms.cern.ch/document/398057/1.
[35] V. Angelov, Nucl. Instr. Meth. Res. A563 (2006), 317-320.
[36] C. Lippmann, SNIC Symposium, SNIC-2006-0043, Stanford, CA (2006).
[37] S. Zimmer, Diploma Thesis, University of Heidelberg, in preparation.
[38] WIENER Plein & Baus GmbH; http://www.wiener-d.com.
[39] D. Emschermann, private communication, Heidelberg (2007).
[40] T. Krawutschke, Dissertation, University of Heidelberg, in preparation.
[41] D. Emschermann, Dissertation, University of Heidelberg, in preparation.
[42] S. M. Schmeling, CERN-JCOP-2004-016 (2004).
[43] J. Mercado, Dissertation, University of Heidelberg, in preparation.
[44] C. Gaspar et al., Distributed Information Management System, EP Division, CERN (2006);
http://dim.web.cern.ch/dim/.
[45] J. Mercado, The ALICE Transition Radiation Detector Control System, Proceedings of the
IEEE conference, Knoxville, TN (2007).
[46] K. Watanabe, Master Thesis, University of Tsukuba, in preparation.
[47] F. Rettig, Diploma Thesis, University of Heidelberg (2007);
http://www.kip.uni-heidelberg.de/ti/publications/diploma/2007FelixRettig.pdf.
[48] S. Kirsch, Diploma Thesis, University of Heidelberg (2007);
http://www.kip.uni-heidelberg.de/Veroeffentlichungen/ps/1818.pdf.
[49] J. De Cuveland, Dissertation, University of Heidelberg, in preparation.
[50] B. Dönigus, Dissertation, TU Darmstadt, in preparation.
[51] M. De Gaspari, Dissertation, University of Heidelberg, in preparation.
[52] U. Westerhoff, Diploma Thesis, University of Münster, in preparation.
[53] http://www.arm.com.
[54] http://www.gnu.org/software/autoconf/.
[55] http://www.gnu.org/software/automake/.
[56] http://handhelds.org/moin/moin.cgi/Ipkg.
[57] http://alice.physi.uni-heidelberg.de/cgi-bin/viewvc/bin/cgi/viewcvs.cgi/.
[58] A. Augustinus et al., ALICE-INT-2006-006, EDMS Id 742954 (2006).
[59] M. Boccioli and G. De Cataldo, ALICE DCS FSM integration guidelines, Version 0.4 (2007);
http://alicedcs.web.cern.ch/AliceDCS/IntegrationDCS/examples/Alice_DCS_FSM_integration_guidelines_0.4.doc.
Acknowledgments
At this point I would like to express my gratitude to the people who made this thesis possible
and supported me:
I am deeply indebted to my supervisor, Dr. Kai Schweda, who gave me the unique opportunity to
work in such an interesting field as the ALICE TRD. During this year I profited very much from
his enthusiasm and advice. He was always interested in the current status of my work and
patiently answered my questions.
I am thankful that Professor Dr. Ulrich Uwer agreed to be the second referee of this thesis.
I would like to thank Dr. Tom Dietel from the University of Münster for proofreading my thesis
and for his suggestions, which improved it.
I thank Dipl.-Phys. Tobias Krawutschke for helping me with all issues concerning the DCS
board, especially with understanding the firmware.
I would like to thank Dipl.-Phys. cand. Stefan Zimmer for his help in solving computer problems
in the lab.
I want to thank Dipl.-Phys. (FH) Jens Steckert and Dr. Venelin Angelov for establishing the
basis of my project by developing and building the prototype, and for helping me to understand it.
I want to thank Dr. Ken Oyama for reading my thesis in great detail and reducing mistakes. I
also want to thank him for his help in all TRD-related topics, especially hardware- and
computer-related issues.
Special thanks go to Dipl.-Phys. David Emschermann for helping me set up the teststand.
I am especially indebted to M.Sc. Jorge Mercado for guiding my first steps into PVSSII, SMI++
and the TRD detector control system and for patiently answering every little question. I
profited very much from his experience in these topics.
Furthermore, I want to thank all other TRD group members for providing such a nice environment.
Finally, I want to thank my parents for their encouragement and support, which made my studies
possible.
This work has been supported by the Helmholtz Association under contract number VH-NG-147
and the Federal Ministry of Education and Research under promotional reference 06HD197D.