Abstract
In view of the upcoming ALICE experiment, a dedicated detector to study ultra-relativistic heavy ion collisions at the Large Hadron Collider (LHC) at CERN, the present thesis has been devoted to the study of elliptic flow, i.e. the azimuthal anisotropy in the momentum distribution of the final state particles produced in the collision. Anisotropic flow is a key observable for studying the thermodynamic properties and the Equation of State of the system created in the collision: the final momentum anisotropy can be connected to the spatial anisotropy of the initial state only by assuming that the system's constituents are strongly coupled and the system behaves as a relativistic fluid. The expected values of elliptic flow and charged multiplicity have been extrapolated to LHC energy (for lead-lead collisions at 5.5 TeV per nucleon pair) in two independent ways: in the Low Density Limit approximation and with the relativistic hydrodynamic model, which produce different impact parameter dependences of the elliptic flow. These predictions have been used as input for event simulations in the AliRoot framework, to develop and test a flow analysis package for the ALICE environment. The analysis code is based on the event plane method, which has already been successfully used for flow studies in other heavy ion experiments at lower energy, such as those at the Relativistic Heavy Ion Collider (RHIC) in Brookhaven and the NA experiments at the Super Proton Synchrotron (SPS) at CERN. One of the biggest experimental uncertainties in measuring flow at the LHC is the magnitude of non-flow effects, i.e. azimuthal correlations between collision products not due to collective flow, and therefore not correlated with the reaction plane. Depending on the analysis method, non-flow effects can introduce a systematic error in the flow measurement. Non-flow effects have been simulated using the Hijing event generator, which implements all known physics effects of a superposition of proton-proton collisions.
Comparison between the expected magnitude of elliptic flow and the estimated magnitude of non-flow contributions defines the applicability of the event plane analysis. The study also showed that non-flow effects are less important when the genuine flow or the multiplicity is large, leading to the conclusion that only peripheral reactions are heavily affected by non-flow, while the best sensitivity is achieved in semi-central collisions. It has also been observed that non-flow contributions change significantly for different particle selections and for different definitions of subevents. Therefore, with different analysis settings, it is possible to minimize them.
Elliptic Flow Measurement
at ALICE
Meting van elliptische stroming met ALICE
(met een samenvatting in het Nederlands)
Thesis for obtaining the degree of doctor at Utrecht University, by authority of the Rector Magnificus, prof. dr. J.C. Stoof, pursuant to the decision of the Board for Promotions, to be defended in public on Monday 16 June 2008 at 4.15 pm
by
Emanuele Lorenzo Simili, born on 19 May 1976 in Milan, Italy
Promotor: Prof. dr. R. Kamermans
Copromotor: Dr. P.G. Kuijer
ISBN: 9789039348390
Copyright © 2008 by Emanuele Lorenzo Simili. All rights reserved. Cover: ‘o ring 8’ design by Andrea Lucca (Cky), concept by Emanuele Simili.
This work is part of the research programme of the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
deep down the rabbit hole
Contents
Introduction
1 Heavy Ion Collisions & Anisotropic Flow
  1.1 A hot, dense, nearly perfect liquid
  1.2 Initial Conditions
    1.2.1 Eccentricity in Glauber MC
  1.3 Medium Properties
    1.3.1 Low Density Limit
    1.3.2 Relativistic Hydrodynamics
    1.3.3 Charged Multiplicity
    1.3.4 Differential Flow
  1.4 Non-Flow correlations
2 Experimental Setup and Analysis Framework
  2.1 The ALICE detector at LHC
    2.1.1 ITS
    2.1.2 TPC
    2.1.3 TRD and TOF
  2.2 The Off-Line Framework
    2.2.1 ROOT
    2.2.2 AliRoot and the ALICE Offline Project
    2.2.3 Event Generators
    2.2.4 AliEn and LCG
  2.3 Track Reconstruction in the Central Barrel Detectors
    2.3.1 Reconstruction of the primary vertex
    2.3.2 Particle identification
    2.3.3 Secondary vertices
3 Flow Analysis in ALICE
  3.1 Aim of the Flow Analysis
  3.2 Event Plane Analysis method
    3.2.1 Resolution
    3.2.2 Autocorrelation
    3.2.3 Weights
    3.2.4 Flattening Weights and Reconstruction Efficiency
    3.2.5 Differential & Integrated Flow
  3.3 Implementation
    3.3.1 Analysis Strategy
    3.3.2 The AliFlow package
  3.4 Other Analysis Methods
    3.4.1 Applicability
4 Feasibility of the Event Plane analysis
  4.1 Non-Flow estimate with Hijing
  4.2 Flow simulation with GeVSim
  4.3 Flow + non-flow
5 Simulations & Results
  5.1 Efficiency study
    5.1.1 Efficiency & Purity
    5.1.2 Particle Composition
    5.1.3 Multiplicity (in)dependence
    5.1.4 Main Vertex
  5.2 Cut optimization
    5.2.1 Final corrections
    5.2.2 Systematic Error
  5.3 Genuine flow reconstruction (GeVSim)
    5.3.1 Simulation details
    5.3.2 Event plane determination and resolution study
    5.3.3 Differential flow of charged particles
    5.3.4 Integrated v2
    5.3.5 Systematic and Statistical Error on the measured v2
    5.3.6 Conclusions
  5.4 Realistic scenario (Hijing + AfterBurner)
    5.4.1 Simulation details
    5.4.2 Event plane and resolution
    5.4.3 Differential and integrated flow
  5.5 Conclusions
6 Conclusions
A Class Description
Summary
Samenvatting
Introduction
“[...] Thus grew the tale of Wonderland:
Thus slowly, one by one,
Its quaint events were hammered out,
And now the tale is done,
And home we steer, a merry crew,
Beneath the setting sun [...]”

Lewis Carroll
The work presented in this thesis is dedicated to the physics of high-energy heavy ion collisions, which offer a very rich playground for studying fundamental properties of strongly interacting matter, such as quarks and gluons, under extreme conditions of energy and density.
From the experimental point of view, quarks are not observed as ‘free’ particles, since the strong force keeps them confined in hadrons. Hadrons are classified as mesons, which are made of quark-antiquark pairs, and baryons, which are made of three quarks. The most common baryons are the proton and the neutron, which are found in the atomic nuclei of all the stable matter in the universe.
Quantum Chromodynamics (QCD) successfully accounts for the fundamental properties observed in high energy experiments and can correctly describe the spectra and the quark configuration of all known hadrons. However, why ‘confinement’ happens in the first place is still an open question in QCD, and the existence of a more crowded configuration of quarks and gluons, which behave almost as free particles inside a confined volume, is not excluded. Relativistic heavy ion collisions offer a glimpse of the creation of such a state, known as the Quark-Gluon Plasma (QGP).
There is experimental evidence for the QGP, mainly based on the collective behavior of the system created in the collision, in particular on its evolution, which seems to be well described by relativistic hydrodynamics. A key observable for studying the thermodynamic properties of the QGP is the ‘elliptic flow’, i.e. the azimuthal anisotropy in the momentum distribution of the particles produced in the collision, which can be connected to the Equation of State of the system.
ALICE is a dedicated heavy ion detector for the reconstruction of lead-lead collisions at the Large Hadron Collider, built between the years 2002 and 2008 at CERN. The main purpose of the ALICE experiment is to study the properties of the QGP at collision energies never achieved before.
The present thesis was developed while the LHC was still under construction; the entire work presented here is therefore based on simulations.
Efforts have been devoted to both the development of parametrizations of the main observables in PbPb collisions at LHC energy, and the implementation of analysis tools interfaced to the ALICE environment.
The present thesis should be seen as a first example of a physics analysis with ALICE, pointing out the possible sources of uncertainty in this kind of measurement. More accurate ways to perform the flow analysis should, and will, be developed in the exciting future of the experiment. Fig.1 shows a full 3D simulation of a heavy ion event, as it will be ‘seen’ by the ALICE detector.
Figure 1. 3D display of a simulated collision in ALICE (picture generated with the Event Display in AliRoot).
The thesis is organized as follows. Chapter 1 gives an overview of the theoretical background of heavy ion collisions, focusing on the concept of ‘anisotropic flow’ and on the extrapolation of v2 to LHC energy. Chapter 2 presents the ALICE detector, as well as the software framework used to simulate and analyse the data. In chapter 3 the event plane analysis method is introduced, together with its implementation in the ALICE environment; a brief overview of other analysis methods is also given. Chapter 4 is dedicated to the feasibility of the event plane analysis, considering the presence of non-flow effects as expected at LHC. Chapter 5 shows a complete analysis of simulated data with full detector reconstruction, and studies the possible sources of uncertainty. Finally, chapter 6 draws some conclusions and gives an outlook on how to improve the measurement.
Chapter 1
Heavy Ion Collisions & Anisotropic Flow
Heavy ion collisions are meant to study the physics of nuclear matter under extreme conditions of energy and density, to characterize the fundamental properties of strongly interacting fields.
The main subject of this thesis is the measurement of elliptic flow, an observable which provides a test of the initial Equation of State of the produced medium in a domain where perturbative QCD does not apply.
The first section of this chapter will present the general understanding of heavy ion collisions, from the experimental observables to their interpretations and the underlying theory (see sec.1.1). The following section (sec.1.2) will describe the initial condition of the system created in the collision and its description in terms of a Glauber model. Section 1.3 is dedicated to the medium properties, i.e. what has been observed so far by existing experiments, and the description of anisotropic flow in terms of a Fourier decomposition. The observed scaling of v2 will also be discussed, and some extrapolations of v2 to LHC energies will be made. The last section (sec.1.4) will briefly introduce the concept of non-flow effects, postponing their detailed study to chapters 4 and 5.
1.1 A hot, dense, nearly perfect liquid

The strong interaction between quarks is described by Quantum Chromodynamics (QCD), in which color degrees of freedom are introduced.
One of the characteristic features of QCD is that the coupling strength increases with the distance between the interacting quarks. In fact, the interaction becomes so strong that in ordinary matter quarks are permanently confined to colorless hadrons. At large momentum transfer, however, the running coupling constant αs(q2) decreases logarithmically, leading to a weak coupling of quarks and gluons called asymptotic freedom. In this regime perturbative QCD (pQCD) can be applied, leading to (approximate) analytical solutions which have been widely tested in high energy physics experiments.
Over the last years, more and more attention has been devoted to the question of how a strongly interacting medium responds to a dramatic increase of the energy density. Considerable progress has been made by numerically solving the QCD field equations on a space-time lattice (the lattice-gauge calculations). These calculations, which have been refined in recent years [1–4], show a phase transition from ordinary matter to a new state where the color degrees of freedom are released. This new state is called the Quark-Gluon Plasma (QGP) and is expected to occur at a temperature of about 175 MeV and an energy density of about 0.7 GeV/fm3 (fig.1.1(a)). Lattice QCD calculations also provide quantitative information about the pressure of the system around its phase transition to this color-deconfined state (fig.1.1(b)).
Figure 1.1. (a) Lattice QCD results (for 2 and 3 quark flavors) for energy density and pressure as a function of the temperature around the QGP phase transition [2]. The rapid increase of the energy density around Tc indicates a rapid increase of the degrees of freedom in the system. (b) Pressure vs. temperature from lattice calculations, showing that the pressure changes smoothly during the phase transition [4].
In the Big Bang theory of cosmology, the universe underwent this phase transition approximately 10 µs after the Big Bang [5]. This phase transition is now believed to be accessible in laboratory experiments: by colliding atomic nuclei at extremely high energy, it is possible to achieve an energy density high enough for the QGP phase transition to take place.
The Relativistic Heavy Ion Collider (RHIC), which has been operational for the last 7 years at Brookhaven National Laboratory, can collide gold nuclei at up to √sNN = 200 GeV, obtaining an energy density of 10 GeV/fm3 [6]. The energy density will be about one order of magnitude higher at the upcoming Large Hadron Collider (LHC) at CERN, where lead nuclei will collide at √sNN = 5.5 TeV.
Fig.1.2(a) shows schematically the evolution of the system after the collision. The system is created at t = 0, and after a pre-equilibrium stage (the detailed physics behind this stage is still unclear) the system enters the QGP phase and keeps expanding. When the system has cooled down to the chemical freeze-out, the constituents hadronize into colorless hadrons, but these still interact elastically until the final decoupling and kinetic freeze-out. The system is then dilute enough to continue its expansion as a free streaming of particles.
Figure 1.2. (a) Schematic view of the collision in 2-dimensional space-time. (b) Pressure versus energy density for an ideal gas, a hadron resonance gas, and a QGP with a phase transition.
The creation of this hot and dense phase by the RHIC experiments, and the discovery that this state seems to behave as a nearly perfect fluid, was considered the major physics discovery of 2005 by the American Institute of Physics [5]. This discovery was based on the collective behavior of the produced medium, especially as observed in its anisotropic flow. However, it is still heavily disputed [7, 8].
‘Anisotropic flow’ is a phenomenological term used to describe the collective evolution of the system, observed as an overall pattern which correlates the momenta of the final state particles. This pattern is believed to develop due to the initial asymmetry of the collision 1 and is preserved by the presence of multiple interactions between the system constituents before the kinetic freeze-out, indicating that the system created in a heavy ion collision is definitely different from a superposition of proton-proton collisions 2.
The underlying physics of anisotropic flow is usually described in terms of a pressure gradient, which is intimately related to the Equation of State of the system (see fig.1.2(b), where this relation is given for the EoS of an ideal gas, of a hadron resonance gas, and for an EoS with a phase transition). For this reason the study of flow provides a sensitive tool to characterize the strongly interacting system created in the heavy ion collision.
Condensed matter experiments [10] also show that in a compressed gas of fermions the pressure and energy density reach their maximum in the center of the system and decrease toward the outside, until they reach a common value close to zero on the system boundary. The different size of the system with respect to the azimuthal coordinate causes the pressure gradient to be larger where the distance between the center and the boundary is shorter, and this azimuthal dependence of the pressure gradient drives the evolution of the system toward an anisotropic expansion.

1 When the collision is not central and the interaction volume is shaped as an almond (see fig.1.3).
2 Besides anisotropic flow, there are many other aspects which clearly distinguish AA from pp collisions, from dN/dη to strangeness enhancement to J/Ψ suppression. See reference [9] for a more comprehensive overview.
Fig.1.3 gives a 3D representation of a non-central collision. The reaction plane is defined by the beam direction and by the direction of the impact parameter 3 (the z and x axes, respectively). The almond in the middle of the figure is the reaction volume, where the participating nucleons take part in the interaction; the two half-spheres represent the spectator nucleons, flying away from each other more or less along the beam direction.
Figure 1.3. Schematic 3D picture of a non-central collision, showing the reaction plane, the almond shape of the interaction volume (participants), and the spectator nucleons flying away in opposite directions. The coordinate system of the event has the x axis oriented in the direction of the impact parameter, the z axis along the beam, and the y axis completing the Cartesian system.
The observed flow mainly consists of a combination of two different patterns: a radial expansion (affecting the thermal spectra of the final state particles) and a non-isotropic one (affecting the spatial orientation of the particle momenta), the latter arising from the initial spatial asymmetry of the reaction volume.
In non-central collisions the azimuthal distribution of final state particles turns out to be highly anisotropic; it is therefore possible to determine an event plane Ψ with respect to which the angular distribution of particle momenta shows a strong cos(n[φ−Ψ]) dependence, called anisotropic flow.
3The impact parameter is the distance between the centers of the two colliding nuclei, usually called ~b.
The standard way to characterize anisotropic flow uses a Fourier expansion of the Lorentz-invariant distribution of the outgoing particles [11]:

E \frac{d^3N}{dp^3} = \frac{1}{2\pi} \frac{d^2N}{p_T \, dp_T \, dy} \left( 1 + \sum_{n=1}^{+\infty} 2 v_n(p_T, y) \cos\left[ n(\phi - \Psi_R) \right] \right) ,   (1.1)

where φ is the azimuthal angle of each particle and ΨR is the reaction plane angle, both measured in the laboratory frame. The first and second coefficients of the expansion, v1 and v2, are called directed and elliptic flow, respectively.
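The convention of eq.1.1 can be illustrated with a small numerical sketch (not part of the thesis analysis code; the function names and sample size are arbitrary choices). Angles are drawn from the distribution truncated to the elliptic term, with ΨR = 0, and v2 is recovered as the mean of cos 2(φ − ΨR):

```python
import math
import random

def sample_phi(v2, n, rng):
    """Draw n azimuthal angles from f(phi) = (1 + 2*v2*cos(2*phi)) / (2*pi),
    i.e. eq. 1.1 truncated to the elliptic term with Psi_R = 0, by accept-reject."""
    fmax = (1.0 + 2.0 * v2) / (2.0 * math.pi)   # maximum of f, used as the envelope
    angles = []
    while len(angles) < n:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        f = (1.0 + 2.0 * v2 * math.cos(2.0 * phi)) / (2.0 * math.pi)
        if rng.uniform(0.0, fmax) < f:
            angles.append(phi)
    return angles

rng = random.Random(42)
phis = sample_phi(0.06, 200_000, rng)
# With the reaction plane known, v2 is simply the mean of cos(2*(phi - Psi_R)).
v2_est = sum(math.cos(2.0 * p) for p in phis) / len(phis)
print(f"v2 estimate: {v2_est:.4f}")  # close to the input value 0.06
```

In real data ΨR is unknown and must be estimated event by event, which is exactly the role of the event plane method introduced in chapter 3.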
Elliptic flow at mid-rapidity (η ∼ 0) is particularly interesting because it reflects the asymmetry of the region where most of the new particles are produced.
In the current interpretation, flow originates from the rescattering between constituents and the initial spatial eccentricity of the overlap region. The number of interactions (and rescatterings) is larger in more central collisions, while the spatial eccentricity is more pronounced in peripheral collisions. The interplay between these two ingredients dominates the trend of elliptic flow versus centrality.
The huge potential of the measurement of elliptic flow at RHIC (and the fact that this is really ‘first day physics’) led to the developments described in this thesis:
• development of the analysis based on the event plane method, which is a quite straightforward and versatile formalism (see section 3.2),
• implementation of the analysis code within the complex ALICE analysis framework (see section 3.3).
To establish the limits of applicability of this approach, a study has been done with Monte Carlo simulations for different particle multiplicities and different magnitudes of elliptic flow. To show how different models can be tested by the ALICE experiment, extrapolations of v2 up to LHC energies have been developed.
The experimental effort of many years on the determination of the elliptic flow is summarized in a "universal scaling" of v2 (shown in fig.1.4), i.e. all existing results can be represented on the same axis [12–14]: v2 is divided by the initial eccentricity ε of the reaction volume (in order to distinguish dynamics from purely geometrical effects) and the ratio v2/ε is plotted versus the rapidity density of the overlap region, defined as the charged particle multiplicity at mid-rapidity dNch/dy divided by the transverse area of the overlap S (see section 1.2).
What is surprising is that all experimental data show an almost linear scaling behavior, suggesting a common driving force in the development of elliptic flow. A very recent work [15] suggests that either the QGP fraction of the system or the system's lifetime might drive this scaling. The plot also shows that only at the highest RHIC energy are the data compatible with ideal relativistic hydrodynamic calculations, which are believed to hold also at even higher energies (e.g. LHC).
The systematic uncertainties in fig.1.4 are under intense study nowadays, including the uncertainty on the measured v2 that arises from the presence of non-flow correlations [12] (azimuthal correlations not related to the reaction plane, see sec.1.4) and from the effects of flow fluctuations [16].
Figure 1.4. Elliptic flow divided by the eccentricity of the reaction volume (to distinguish dynamics from purely geometrical effects), plotted versus the entropy density of the overlap region [14], for data from E877, NA49, and STAR at various collision energies (the x and y axes have been enlarged to cover the LHC energy range). The hydrodynamic predictions for two different EoS are shown (an ideal gas, EoS I, and a QGP with phase transition, EoS Q, see sec.1.3.2), together with a linear fit of the data (see sec.1.3.1).
LHC will provide data points at much higher energy, where also an increase in the multiplicity is expected, which will enhance the detectability of elliptic flow. Moreover, at the higher initial energy density, the system will probably stay longer in the partonic stage, where all of the elliptic flow is generated.
Based on the extrapolation described in sec.1.3, the most central Pb-Pb collisions at 5.5 ATeV can be represented on the x axis of fig.1.4 at (1/S) dNch/dy ≃ 60–80, depending on the definition of the transverse area (see sec.1.2).
To extrapolate from the existing data the magnitude of v2 to be expected at LHC energies, two ingredients are needed:
• the geometry of the initial system (eccentricity ε, transverse area S), calculated with a Glauber model of heavy ion collisions (see section 1.2),
• the EoS of the produced medium, which is needed to transform the initial spatial asymmetry of the system into the momentum anisotropy observed in the final state (see section 1.3).
Two models have been considered to describe the properties of the produced medium and to estimate the final state momentum anisotropy with respect to the initial eccentricity of the reaction volume:
The Microscopic Transport (cascade) Model [17] describes the time evolution of the hadronic/partonic phase by solving a transport equation derived from kinetic theory. In this model, collectivity depends on the interaction cross section between the constituents, and the main assumption is that the mean free path is comparable to the system size (λ ≫ 0). Calculations are done in a perturbative way, giving the first correction to the collisionless limit (free streaming). This approach, also called the Low Density Limit approximation, is described in sec.1.3.1.
The Relativistic Hydrodynamic Model [18] describes the evolution of the system (before the kinetic freeze-out) as the expansion of volume elements of a relativistic fluid; the main assumption is that the mean free path is much smaller than the system size (λ ∼ 0). This concept appeared for the first time in 1953 in a paper by Landau [19]. The system is described in terms of (classical) macroscopic quantities, such as pressure and energy density; local thermal equilibrium is assumed, and an Equation of State is required. In this approach the v2 coefficient turns out to be proportional to the speed of sound in the medium times the spatial eccentricity (see sec.1.3.2).
1.2 Initial Conditions

The usual tool to describe the initial state of a heavy ion collision is a Glauber model [20]. For a given pair of colliding nuclei with mass numbers A and B (usually called target and projectile), the Glauber model provides a way to calculate the number of nucleon-nucleon interactions and the geometry of the overlap region as a function of the impact parameter b (see fig.1.5).
Glauber calculations can be either optical [20], where nucleon positions are approximated by a smooth distribution (the number of participants is proportional to the geometrical overlap of the two nuclear density functions), or Monte Carlo, where the nucleons are point-like centers randomly distributed inside the nucleus and the probability of each interaction is calculated inside the overlap region proportionally to the nucleon-nucleon cross section [21]. The two approaches lead to similar results over a large range of impact parameters, differing only for the most central and most peripheral collisions [21]. For extremely peripheral collisions (b ≥ 2RA) the optical Glauber approach does not provide a good parametrization of the physics of the process, which is then dominated by the random occurrence of single nucleon-nucleon interactions.
However, the study of fluctuations in a Glauber Monte Carlo approach was beyond the purpose of the present thesis (see sec.1.2.1). The extrapolations developed in the following sections are done using the optical Glauber approach.
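A Monte Carlo Glauber event of the kind just described can be sketched in a few lines. The following is an illustrative toy, not the implementation used in any ALICE software: the function names, the sampling box, and the hard-sphere interaction criterion (two nucleons collide if their transverse distance squared is below σNN/π) are assumptions, and it uses the Woods-Saxon parameters and σNN = 60 mb quoted later in this section:

```python
import math
import random

RA, XI = 6.62, 0.551          # Pb Woods-Saxon radius and diffuseness (fm), from the text
SIGMA_NN = 6.0                # 60 mb expressed in fm^2
D2_MAX = SIGMA_NN / math.pi   # interaction criterion: transverse distance^2 < sigma/pi

def sample_nucleus(a, rng):
    """Sample `a` nucleon positions from the Woods-Saxon profile by accept-reject
    (uniform points in a cube, accepted with probability rho(r)/rho(0))."""
    pts = []
    while len(pts) < a:
        x, y, z = (rng.uniform(-12.0, 12.0) for _ in range(3))
        r = math.sqrt(x * x + y * y + z * z)
        if rng.random() < 1.0 / (math.exp((r - RA) / XI) + 1.0):
            pts.append((x, y))        # only transverse coordinates matter below
    return pts

def glauber_event(b, rng, a=208):
    """One MC Glauber event at impact parameter b: returns (N_part, N_coll)."""
    nucl_a = [(x + b / 2.0, y) for x, y in sample_nucleus(a, rng)]
    nucl_b = [(x - b / 2.0, y) for x, y in sample_nucleus(a, rng)]
    wounded_a, wounded_b, ncoll = set(), set(), 0
    for i, (xa, ya) in enumerate(nucl_a):
        for j, (xb, yb) in enumerate(nucl_b):
            if (xa - xb) ** 2 + (ya - yb) ** 2 < D2_MAX:
                ncoll += 1
                wounded_a.add(i)
                wounded_b.add(j)
    return len(wounded_a) + len(wounded_b), ncoll

rng = random.Random(1)
npart, ncoll = glauber_event(7.0, rng)
print(f"b = 7 fm: N_part = {npart}, N_coll = {ncoll}")
```

Repeating such events at fixed b and looking at the spread of the participant positions is precisely what a fluctuation study in the Glauber MC approach (sec.1.2.1) would do.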
The Glauber calculation starts with a parametrization of the spatial distribution of the colliding nuclei (defined as the probability to find a nucleon at radius r), which is given by a Woods-Saxon profile:

\rho_A(r) = \frac{\rho_0}{e^{(r - R_A)/\xi} + 1} ,   (1.2)
where RA is the radius of the nucleus with atomic mass A and atomic number Z (the same radius is taken for protons and neutrons), ξ is the nuclear surface diffuseness, and ρ0 is a normalization factor. The distributions of protons and neutrons are normalized separately, in such a way that ∫ρp(r)d\vec{r} = Z and ∫ρn(r)d\vec{r} = A − Z.
In the present calculations the colliding nuclei are ^{208}_{82}Pb and the parameters of the nuclear density distribution (eq.1.2) have been taken from the literature (nuclear data [22]): the radius is RA = 6.621 ± 0.02 fm, and the nuclear surface diffuseness ξ = 0.551 ± 0.01 fm.
The nuclear thickness function is defined as the optical path through the nucleus along the beam direction (z):

T_A(x, y) = \int_{-\infty}^{+\infty} \rho_A(x, y, z) \, dz .   (1.3)
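The thickness function is easily evaluated numerically. The sketch below (illustrative only, not the thesis software) integrates the Woods-Saxon profile of eq.1.2 along z on a simple grid; as an assumption, ρ0 is set here to the ordinary nuclear density ≈ 0.16 fm⁻³ rather than solved from the normalization condition:

```python
import math

RA, XI = 6.62, 0.551   # (rounded) Pb Woods-Saxon parameters from the text (fm)
RHO0 = 0.16            # assumed normalization: ordinary nuclear density (fm^-3)

def rho(r):
    """Woods-Saxon nuclear density profile, eq. 1.2."""
    return RHO0 / (math.exp((r - RA) / XI) + 1.0)

def thickness(x, y, zmax=20.0, dz=0.01):
    """Nuclear thickness T(x, y): line integral of rho along the beam axis, eq. 1.3."""
    total, z = 0.0, -zmax
    while z < zmax:
        total += rho(math.sqrt(x * x + y * y + z * z)) * dz
        z += dz
    return total

# At the center of the nucleus the thickness is roughly 2 * rho0 * RA.
print(f"T(0, 0) ~ {thickness(0.0, 0.0):.2f} nucleons/fm^2")
```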
The transverse coordinates for a Glauber calculation are shown in fig.1.5: the x axis is oriented in the direction of the impact parameter b and y is the direction perpendicular to it.
Figure 1.5. Coordinate system of a noncentral collision, used for the Glauber calculation. The impact parameter b is the distance between the centers of the two nuclei.
In non-central collisions, the probability of each binary nucleon-nucleon interaction in the transverse plane is given by the product of the thickness functions of the two nuclei (transversally shifted by the impact parameter b) times the total inelastic nucleon-nucleon cross section σNN:

P_{BC}(x, y; b) = T_A(x + b/2, y) \, T_B(x - b/2, y) \, \sigma_{NN} .   (1.4)
The energy dependence of Glauber calculations is determined by the nucleon-nucleon inelastic cross section σNN(√s), which is extrapolated from existing pp and pp̄ data, including the highest energy Tevatron pp̄ collisions (see the current Review of Particle Physics [23] or the PDG website [24]).
Following the value 4 used in the ALICE PPR [25], the nucleon-nucleon inelastic cross section for Pb-Pb at a collision energy √sNN = 5.5 TeV has been set to σNN = 60 mb.
However, the main ingredients of the extrapolations given in sec.1.3 are not very sensitive to the chosen value of the cross section (see below).
Figure 1.6. Transverse picture of the density distribution of wounded nucleons NWN and binary collisions NBC (arbitrary scale) in the optical Glauber calculation, for an impact parameter b = 7 fm.
The total number of binary nucleon-nucleon collisions is obtained by integrating over the transverse plane (fig.1.6(b)):

N_BC(b) = ∫ T_A(x + b/2, y) T_B(x − b/2, y) σ_NN dx dy , (1.5)
where the x axis is oriented in the direction of the impact parameter b and y in the perpendicular one.
The number of ‘wounded nucleons’ is defined as the number of nucleons participating in the production process with at least one collision, and is given by the integral [27] (fig.1.6(a)):
N_WN(b) = ∫ { T_A(x + b/2, y) [1 − (1 − σ_NN T_B(x − b/2, y)/B)^B] + T_B(x − b/2, y) [1 − (1 − σ_NN T_A(x + b/2, y)/A)^A] } dx dy . (1.6)
⁴The value given in the ALICE PPR (p. 1583 of [25]) is σ_NN = 57 mb. Other references quote a higher cross section at the same collision energy [26]; however, the actual value of the nucleon-nucleon inelastic cross section at LHC remains an open issue.
Due to the symmetry of the system (Pb+Pb), in our calculation T_A = T_B. The two panels of fig.1.6 show the density distributions of wounded nucleons and binary collisions. Depending on the choice of one or the other, the impact parameter dependence of the geometrical quantities (such as the spatial eccentricity and the transverse area) changes significantly (see fig.1.8).
Figure 1.7 shows the impact parameter dependence of the number of binary collisions (N_BC) and of the number of participants in the reaction (wounded nucleons, N_WN). As can be seen, only the number of binary collisions depends strongly on the choice of the nucleon-nucleon cross section, while the number of participants is affected only at the level of 1%.
The impact parameter range has been limited to 0 < b < 15 fm (see fig.1.7), where the upper limit is consistent with almost no interactions, ⟨N_BC⟩ ≃ 0.
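The integrals of eqs. 1.5 and 1.6 can be sketched numerically as follows. This is a toy optical-Glauber calculation (not the AliRoot code), assuming the Woods-Saxon ²⁰⁸Pb parameters of sec.1.2 and σ_NN = 60 mb = 6.0 fm².

```python
import numpy as np

# Toy optical Glauber for Pb-Pb: N_BC(b) (eq. 1.5) and N_WN(b) (eq. 1.6),
# assuming R_A = 6.62 fm, xi = 0.55 fm and sigma_NN = 60 mb = 6.0 fm^2.
R_A, XI, A, SIG = 6.62, 0.55, 208, 6.0

def profile(r):
    return 1.0 / (1.0 + np.exp((r - R_A) / XI))

rr = np.linspace(0.0, 25.0, 2501)
rho0 = A / np.sum(4.0 * np.pi * rr**2 * profile(rr) * (rr[1] - rr[0]))

# Tabulate the thickness function on a radial grid, then interpolate.
rad = np.linspace(0.0, 25.0, 501)
z = np.linspace(-25.0, 25.0, 1001)
T_rad = np.sum(rho0 * profile(np.sqrt(rad[:, None]**2 + z**2)), axis=1) * (z[1] - z[0])

def T(x, y):
    return np.interp(np.hypot(x, y), rad, T_rad)

g = np.linspace(-15.0, 15.0, 301)
X, Y = np.meshgrid(g, g)
dA = (g[1] - g[0])**2

def n_bc(b):
    """Number of binary collisions, eq. 1.5."""
    return np.sum(T(X + b / 2, Y) * T(X - b / 2, Y)) * SIG * dA

def n_wn(b):
    """Number of wounded nucleons, eq. 1.6 (here with A = B = 208)."""
    TA, TB = T(X + b / 2, Y), T(X - b / 2, Y)
    return np.sum(TA * (1.0 - (1.0 - SIG * TB / A)**A)
                  + TB * (1.0 - (1.0 - SIG * TA / A)**A)) * dA
```

At b = 0 this yields N_WN close to 2A ≃ 416 and N_BC of order 2·10³, both dropping towards zero near b ≃ 15 fm, in qualitative agreement with fig.1.7.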
Figure 1.7. Impact parameter dependence of the number of wounded nucleons (left) and of the number of binary collisions (right), for impact parameters b from 0 to 15 fm. The continuous bands represent the 3σ uncertainty on the nuclear radius and width (see above), while the dashed bands represent the uncertainty due to the particular choice of the cross section (the upper and lower lines are produced with σ_NN = 90 and 40 mb respectively).
The Glauber model also provides the geometry of the overlap region, parametrized by the spatial eccentricity and the transverse area of the overlap (both used in fig.1.4). There are different ways to define these geometrical quantities, depending on the chosen distribution (weighting function) used to compute the averages over the x and y coordinates, e.g. geometric overlap, wounded nucleons or binary collisions (reference [28] gives a few examples of the procedure). Another option, attempted in more recent developments, makes use of Color Glass Condensate (CGC) initial conditions, which lead to larger eccentricities and therefore higher flow values [29, 30].
The spatial eccentricity ε is defined in terms of the variances of the distribution projected on the x and y axes (σ_x² = ⟨x²⟩ − ⟨x⟩², σ_y² = ⟨y²⟩ − ⟨y⟩²):

ε ≡ (σ_y² − σ_x²) / (σ_x² + σ_y²) . (1.7)
The transverse area of the overlap S (also used in fig.1.4) is defined as:

S ≡ π σ_x σ_y , (1.8)

where σ_x is the RMS width along the direction of the impact parameter b, and σ_y the one perpendicular to it.
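Eqs. 1.7 and 1.8 amount to second moments of a chosen weighting density. The snippet below illustrates the definitions on an invented Gaussian 'almond', narrower along x (the impact-parameter direction); it is not the actual Glauber density.

```python
import numpy as np

# Eccentricity (eq. 1.7) and transverse area (eq. 1.8) from the second
# moments of a weighting density w(x, y) sampled on a grid.
def ecc_and_area(w, x, y):
    """Return (epsilon, S) for density w on meshgrid arrays (x, y)."""
    norm = w.sum()
    mx, my = (w * x).sum() / norm, (w * y).sum() / norm
    sx2 = (w * x**2).sum() / norm - mx**2
    sy2 = (w * y**2).sum() / norm - my**2
    eps = (sy2 - sx2) / (sx2 + sy2)        # eq. 1.7
    area = np.pi * np.sqrt(sx2 * sy2)      # eq. 1.8
    return eps, area

g = np.linspace(-10.0, 10.0, 201)
X, Y = np.meshgrid(g, g)
# Toy almond shape: narrower along x, the impact-parameter direction.
w = np.exp(-X**2 / (2.0 * 2.0**2) - Y**2 / (2.0 * 3.0**2))
eps, S = ecc_and_area(w, X, Y)
```

For this Gaussian toy the exact values are ε = (9 − 4)/(9 + 4) ≈ 0.385 and S = 6π ≈ 18.8 fm², which the discrete sums reproduce closely.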
Figure 1.8. Impact parameter dependence of the eccentricity ε (left) and of the transverse area S of the overlap region (right), for both the density of wounded nucleons (+) and of binary collisions (×). The plots are produced with the same Glauber calculation as fig.1.7 (i.e. 30 steps in impact parameter from 0 to 15 fm).
Fig.1.8 shows the impact parameter dependence of the collision geometry (ε and S) obtained from the optical Glauber calculation, for both the distribution of wounded nucleons and that of binary collisions (the integrals over the transverse plane of eq.1.6 and eq.1.5 respectively). Both distributions have a physical meaning, and the choice of one or the other will be discussed in sec.1.3.
The initial energy density in the transverse plane depends only on the thickness functions T_A and T_B (eq.1.3) and is defined as:

E(x, y) = f(T_A(x, y), T_B(x, y)) , (1.9)

where f is a function that depends on the initial assumptions (different approaches can be found in the literature [31]).
Early thermalization is assumed, with all the available energy thermalized in a Lorentz-contracted volume [32].
1.2.1 Eccentricity in Glauber MC

Recent developments, mainly motivated by the fluctuations observed in v2 [33], suggest⁵ a different definition of eccentricity: the eccentricity of the participants ε_part, in which the fluctuations in the positions of the participants (wounded nucleons) are explicitly taken into account [34].

⁵The obvious assumption is that elliptic flow follows the initial eccentricity of the system.
The total number of collisions does not depend only on the geometrical overlap of the two nuclei: each binary interaction occurs with a probability proportional to σ_NN. Due to this, the spatial distribution of the nucleons that actually participate in the reaction may have a slightly different shape than the geometrical overlap. The effect is much more pronounced in peripheral collisions, where the overlap region (and its thickness) is small and the randomness of binary processes dominates.
Therefore, the ellipse created by the participating nucleons may be rotated with respect to the geometrical overlap, so that the minor axis is not oriented along the impact parameter vector b (see fig.1.9).
Figure 1.9. Schematic view of a collision of two identical nuclei in the transverse plane. The x and y axes are drawn in the standard way, with x oriented in the direction of the impact parameter b. The circles indicate the positions of wounded nucleons (participants). Due to fluctuations, the interaction region is shifted and tilted with respect to the standard (x, y) frame, leading to a spatial distribution which is better approximated by an ellipse along the x′ and y′ axes.
The eccentricity of the participants ε_part can be defined with respect to the standard x and y axes (or any other Cartesian system in the transverse plane) as:

ε_part ≡ √[(σ_y² − σ_x²)² + 4σ_xy²] / (σ_x² + σ_y²) , (1.10)
where σ_xy = ⟨xy⟩ − ⟨x⟩⟨y⟩. Note that this expression reduces to eq.1.7 if the elliptic distribution of the participants has the same orientation as the geometrical overlap. Eccentricity fluctuations should also be taken into account, especially in very peripheral events [16, 33, 35]. However, including these effects would have required a Glauber Monte Carlo approach and the development of software tools that were not yet available for ALICE. Therefore, in the following, ε always refers to the geometrical eccentricity as defined by eq.1.7.
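The point of eq.1.10 is that ε_part is invariant under rotations of the transverse frame, while eq.1.7 is not. A quick check on synthetic 'participant' positions (a tilted Gaussian cloud standing in for a Glauber MC event):

```python
import numpy as np

# Participant eccentricity of eq. 1.10, computed from the second moments
# (population variances and covariance) of the participant positions.
def eps_part(x, y):
    sx2, sy2 = x.var(), y.var()
    sxy = np.mean(x * y) - x.mean() * y.mean()
    return np.sqrt((sy2 - sx2)**2 + 4.0 * sxy**2) / (sx2 + sy2)

rng = np.random.default_rng(0)
xp = rng.normal(0.0, 2.0, 5000)      # minor axis of the toy ellipse
yp = rng.normal(0.0, 3.0, 5000)      # major axis
phi = 0.4                            # tilt angle of the participant ellipse
xr = xp * np.cos(phi) - yp * np.sin(phi)
yr = xp * np.sin(phi) + yp * np.cos(phi)
```

Here eps_part(xr, yr) equals eps_part(xp, yp): the tilt does not change the result, whereas eq.1.7 applied to the rotated sample would underestimate the eccentricity.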
1.3 Medium Properties

In the final state particle spectra, both a thermal and an anisotropic collective component can be observed. The first is due to the thermal motion of the particles in the hot dense system created in the collision; the second is a radial boost due to the asymmetry of the system.
A thermalized medium
The thermal motion of the system's constituents is observed in the transverse momentum spectra of the final state particles. The low-pT component of the observed dN/dpT distribution approximately follows a Boltzmann black-body spectrum [8]:

dN/dpT |_{y∼0} ∝ 1 / (e^{(pT − μ_B)/T_app} ± 1) , (1.11)
where µB is the baryon chemical potential, a parameter which accounts for the energy needed to produce the hadrons.
The radial boost, due to the expansion of the system (responsible for the blue shift in the final spectra), can be incorporated into the phenomenological parameter 'apparent temperature' (T_app [36]), which is expressed in terms of the transverse flow velocity v_T [37]:

T_app = T_f.o. + (1/2) m ⟨v_T⟩² . (1.12)
The freeze-out temperature (T_f.o.) quantifies the thermal motion of the constituents just before the kinetic freeze-out, when the system decouples and all the particles propagate freely (free streaming). The transverse flow contributes to the apparent temperature proportionally to the mass of the particle (heavier particles moving at a fixed velocity carry a higher momentum), therefore a simultaneous fit of identified particle spectra allows the two components to be disentangled [38, 39].
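The simultaneous fit mentioned above can be illustrated directly from eq.1.12: T_app is linear in the particle mass, so fitting T_app versus m for several identified species returns T_f.o. as the intercept and ⟨v_T⟩²/2 as the slope. The numbers below (T_f.o. = 120 MeV, ⟨v_T⟩ = 0.55) are invented for illustration, not measured values.

```python
import numpy as np

# Toy illustration of disentangling T_f.o. and <v_T> via eq. 1.12:
# T_app = T_f.o. + (1/2) m <v_T>^2 is linear in the particle mass m.
m = np.array([0.140, 0.494, 0.938])   # pi, K, p masses in GeV
T_FO, V_T = 0.120, 0.55               # assumed "true" values (GeV, units of c)
T_app = T_FO + 0.5 * m * V_T**2       # apparent temperatures, eq. 1.12

slope, intercept = np.polyfit(m, T_app, 1)  # linear fit in the particle mass
v_t_fit = np.sqrt(2.0 * slope)              # recovered <v_T>
```

The intercept recovers the assumed freeze-out temperature and the slope the assumed flow velocity; with real spectra the points scatter and the fit returns the best-compromise values.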
However, the distribution of eq.1.11 does not properly reproduce the long tail at high pT observed in experimental data, which is dominated by non-thermal processes (hard scattering, recombination [40]). To better reproduce the data, the input distribution used in the GeVSim simulations presented in chapter 5 is a phenomenological functional form inspired by the Levy distribution [41] (see sec.5.3 for the details).
Particle ratios

Assuming that the hadronization process occurs in an equilibrated system composed of non-interacting hadron resonances, hadron yields can be described by a thermal distribution calculated in a grand canonical ensemble [42]. The relative abundances of hadron species are interpreted in terms of statistical hadronization [43, 44]:
n_i = [g / (2π²)] ∫_0^∞ p² / (e^{(E_i(p) − μ_i)/T_ch} ± 1) dp , (1.13)

where E_i = √(p_i² + m_i²), and μ_i is the chemical potential for the creation of a particle of species i = π, K, .... Eq.1.13 gives the yields at the freeze-out time; short-lived particles and resonances need to be taken into account separately to correctly reproduce the particle ratios observed in the final state.
The success of the above distribution in describing RHIC data supports the assumption that the system is in local thermal equilibrium when the hadronization process takes place, at the chemical freeze-out. The chemical freeze-out represents the end of the inelastic processes changing the chemical composition of the system; it occurs at an earlier time than the kinetic freeze-out, which is driven by elastic processes, and thus at a higher temperature (T_ch ≃ 177 MeV [45]).
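Eq.1.13 is straightforward to evaluate numerically. The sketch below, with assumed T_ch = 177 MeV, μ_i = 0 and unit degeneracy, gives the primary thermal densities of a single pion and kaon species; resonance feed-down, which the text notes is needed for realistic ratios, is ignored.

```python
import numpy as np

# Primary thermal density from eq. 1.13 (Bose statistics: '-' sign in the
# denominator), assuming T_ch = 0.177 GeV, mu = 0 and degeneracy g = 1.
def thermal_density(m, g=1.0, T=0.177, mu=0.0, boson=True):
    """n_i in GeV^3 (natural units), integrated numerically over momentum."""
    sign = -1.0 if boson else 1.0
    p = np.linspace(1e-4, 5.0, 20000)   # momentum grid in GeV
    dp = p[1] - p[0]
    E = np.sqrt(p**2 + m**2)
    return g / (2.0 * np.pi**2) * np.sum(p**2 / (np.exp((E - mu) / T) + sign)) * dp

n_pi = thermal_density(0.140)   # single pion species
n_k = thermal_density(0.494)    # single kaon species
ratio = n_k / n_pi              # primary K/pi, before feed-down
```

The heavier kaon is thermally suppressed relative to the pion; the resulting primary ratio is of order 0.3 at this temperature.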
The observed thermal dN/dpT spectra and the success of statistical hadronization in describing the observed particle yields support the assumption that the system is in (local) thermal equilibrium, and if the system is in local equilibrium then it could show hydrodynamic behavior [8].
In non-central collisions the dN/dφ distribution is azimuthally anisotropic (see also sec.1.1). This phenomenon has been observed in heavy ion experiments over a wide range of energies, and an event plane can be determined on an event-by-event basis, defining the preferred direction of the radiated particles.
The theoretical efforts in interpreting the data collected at RHIC have contributed to a robust description of the system in terms of relativistic hydrodynamics [18]. However, questions are still open, especially with respect to the initial conditions.
Other descriptions are the low density limit approximation (LDL) [17] and numerical implementations of RQMD (Relativistic Quantum Molecular Dynamics) [46–48]. These theoretical models of flow also provide the tools to make extrapolations to LHC energies. Extrapolations based on the LDL and on relativistic hydrodynamics will be described in the following subsections (see sec.1.3.1 and 1.3.2). The RQMD model has not been considered in the present thesis because it already produces too little flow with respect to RHIC data.
The charged particle multiplicity is calculated from the number of wounded nucleons using a saturation model for particle production (see sec.1.3.3 for the details).
1.3.1 Low Density Limit

Figure 1.4 shows the linear increase of v2/ε with the multiplicity (entropy) density (1/S)(dN/dy). A simple extrapolation to LHC can be done by performing a linear fit on the existing data in a range where they appear to be linear (i.e. from (1/S)(dN/dy) ≳ 5). The fit is justified by the LDL model (see eq.1.14), but is extended much above the 'low density' domain.
The Low Density Limit is a perturbative approximation which describes the first correction to free streaming [36, 49]. It is valid when the particle mean free paths (λ_i ≃ 1/(σ_i ρ), where σ_i is the cross section of particle species i = π, K, ... and ρ is the particle density) are larger than the transverse dimensions of the overlap zone.
Under this assumption, particles can escape from the collision zone almost without interacting, and the system behavior is close to free streaming (collisionless limit). The first order correction to free streaming is calculated from particle collisions. Particles are initially produced azimuthally symmetrically in momentum space but not in coordinate space, and the interactions with comovers produce an azimuthally asymmetric momentum distribution because of the (azimuthal) spatial asymmetry of the source.
The starting point is the initial condition at formation time. Subsequent scatterings between comovers are described by inserting a collision term into the free streaming distribution function. The first order correction is calculated as the deviation from cylindrical symmetry, which directly leads to the magnitude of the elliptic flow v2 for the particle species i = π, K, p, ... (see reference [36]):
v_2^i = [ε / (16π σ_x σ_y)] Σ_j ⟨v_ij σ_tr^ij⟩ (dN_j/dy) · v_i⊥² / (v_i⊥² + ⟨v_j⊥²⟩) , (1.14)

where v_i is the velocity of the particle, v_j that of the scatterer (the transverse velocities v_⊥ with respect to the reaction plane are used), and v_ij their relative velocity. The averages ⟨..⟩ are taken over the scatterer momenta p_j. Since it is the momentum transferred in the collisions that deforms the momentum distribution, σ_tr^ij is the momentum transport cross section (i.e. the cross section averaged over energy and scattering angle [36]).
From eq.1.14, the elliptic flow is proportional to the eccentricity of the overlap region ε, and it vanishes for an azimuthally symmetric source (central collisions).
The integrated value of v2 of this linear extrapolation is calculated as:

v_2 ≃ A_LDL (ε/S) (dN/dy) + B_LDL , (1.15)
with ε and S given by the density of nucleons participating in the reaction in a Glauber calculation (eq.1.7 and 1.8 respectively).
The coefficient A_LDL = 0.00614 ± 0.0001 and the constant B_LDL = 0.051 ± 0.002 are obtained from a linear fit of the highest energy RHIC data⁶ (see fig.1.4). The fit has been restricted to only one set of data points because the scaling is not perfect (see the discussion in sec.1.1).
⁶The fit only includes data points from Au-Au collisions at √s_NN = 200 GeV (i.e. N_pts = 9, χ²/DoF ≃ 9).
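The linear extrapolation of eq.1.15 is then a one-liner. The sketch below uses the fitted coefficients quoted above, with hypothetical mid-central values of ε, S and dN/dy standing in for the Glauber and multiplicity inputs.

```python
# LDL extrapolation, eq. 1.15, with the fit coefficients quoted in the text.
A_LDL, B_LDL = 0.00614, 0.051

def v2_ldl(eps, area, dndy):
    """Integrated v2 from the linear Low Density Limit fit (eq. 1.15)."""
    return A_LDL * (eps / area) * dndy + B_LDL

# Hypothetical mid-central inputs: eps = 0.3, S = 20 fm^2, dN/dy = 1000.
v2_mid = v2_ldl(0.3, 20.0, 1000.0)
```

With these toy inputs the extrapolated integrated v2 is of order 0.14, in the ballpark of the LDL band in fig.1.12.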
1.3.2 Relativistic Hydrodynamics

Relativistic hydrodynamics is a classical calculation, which describes the system in terms of volume elements of a relativistic fluid [18, 31]. Each 'fluid cell' x is characterized by its energy-momentum tensor:
T^{μν} = (E(x) + p(x)) u^μ(x) u^ν(x) − p(x) g^{μν} , (1.16)

where p and E are the pressure and the energy density of the fluid cell, and u^μ = γ(1, v_x, v_y, v_z) is the flow velocity.
The evolution is ruled by conservation laws. Local conservation of energy and momentum is expressed by the equations:

∂_μ T^{μν} = 0 , (ν = 0, 1, 2, 3). (1.17)
Since the fluid is made of quanta, it carries a few conserved charges N_i (such as electric charge, baryon number, strangeness, etc.), with charge densities n_i(x) (i = 1, ..., M) corresponding to charge current densities j_i^μ(x) = n_i(x) u^μ(x). Charge conservation is expressed by the equations:
∂_μ j_i^μ = 0 , (i = 1, ..., M). (1.18)
Hydrodynamics implies the concepts of thermodynamics; in particular, an equation of state (EoS) of the system is needed to close the system of differential equations. The above picture provides a set of 4 + M differential equations, involving 5 + M undetermined fields:

• the 3 independent components of the flow velocity u^μ(x),
• the energy density E(x),
• the pressure p(x),
• the M conserved charge densities.

This set of equations is closed by an equation of state which relates the local thermodynamic quantities p and E (see fig.1.2(b)).
The EoS of strongly interacting particles can, in principle, be calculated by lattice QCD (see fig.1.1). However, those calculations are technically difficult and still lead to large uncertainties [4]. An alternative is to model the system of nuclear matter as a non-interacting gas of hadronic resonances [50].
If the relaxation rate is not fast enough to ensure an almost instantaneous thermalization, the energy-momentum tensor and charge current densities must be generalized to include dissipative effects (e.g. shear viscosity [51]). The goal of this approach is to provide a more accurate description of heavy-ion collisions by taking into account the deviation from an ideal fluid. First order viscous corrections have been derived [52, 53], however the actual value of the viscosity in a hot QGP is still
Figure 1.10. Time evolution of the transverse energy density profile from hydrodynamic calculations, for an impact parameter b = 7 fm (snapshots at 0, 2, 4, 6 and 8 fm/c) [18]. As the system expands anisotropically, the initial eccentricity vanishes.
controversial. A universal lower bound on the viscosity to entropy ratio has been proposed in connection with black-hole physics, η/s > ħ/4π [54], while a recent study of elliptic flow at RHIC suggests that the magnitude of the viscous correction is significantly higher than this lower bound, η/s ≃ 0.11 to 0.19 ħ [55].
The ratio between the elliptic flow and the spatial eccentricity of the overlap parametrizes the speed at which a perturbation propagates through the system. In the hydrodynamic picture, this ratio is proportional to the square of the velocity of sound in the medium: v2/ε ∝ c_s². The velocity of sound is defined as c_s² ≡ dP/dE. Different equations of state lead to different relations between the pressure and the energy density (see fig.1.1), and therefore to different values of c_s [56].
The spatial anisotropy appears in the early stage of the collision and is self-quenching (see fig.1.10); the elliptic flow v2, however, is conserved during the whole evolution of the system, and therefore carries information on the initial conditions [18].
A simple extrapolation can be made by assuming the ratio v2/ǫ to be constant with respect to the centrality, which is approximately true up to very peripheral collisions [56].
A lower limit on v2 is given by the equation of state of a quark gluon plasma which undergoes a soft transition to the hadronic phase (EoS Q). The value of c_s² has been chosen at the limit of the non-relativistic regime, c_s = √0.22 [57]. According to the initial conditions used in [57], the eccentricity is calculated from the entropy density distribution, which is proportional to the density of wounded nucleons. Therefore the values of v2 versus centrality are obtained by scaling c_s² with the eccentricity of the wounded nucleon distribution (v2(b) = 0.22 × ε_WN(b)).
For the upper limit, the equation of state of an ideal gas of massless fermions has been chosen (EoS I), giving P = E/3 (and c_s = √(1/3)) [31, 56]. In this case, the eccentricity has been calculated from the density of binary collisions, ε_BC (proportional to the initial energy density [18]), which has on average the same magnitude as ε_WN(b) but a slightly different centrality dependence (see fig.1.8(a)). Elliptic flow versus centrality is obtained as v2(b) = (1/3) × ε_BC(b).
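The two hydrodynamic limits can be sketched as follows; the eccentricity curves below are invented linear stand-ins for the Glauber results of fig.1.8 (in the thesis the actual ε_WN(b) and ε_BC(b) are used).

```python
import numpy as np

# Hydrodynamic extrapolation: v2(b) = c_s^2 * eps(b), with c_s^2 = 0.22
# (EoS Q, wounded-nucleon eccentricity, lower limit) and c_s^2 = 1/3
# (EoS I, binary-collision eccentricity, upper limit). The eccentricities
# are toy linear curves, not the actual Glauber output.
b = np.linspace(0.0, 15.0, 31)
eps_wn = 0.030 * b               # toy wounded-nucleon eccentricity
eps_bc = 0.033 * b               # toy: slightly larger for binary collisions

v2_low = 0.22 * eps_wn           # EoS Q lower limit
v2_high = (1.0 / 3.0) * eps_bc   # EoS I upper limit
```

Both curves vanish for central collisions (ε → 0) and grow with impact parameter, with the EoS I band lying above the EoS Q one, as in fig.1.12.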
1.3.3 Charged Multiplicity

In order to estimate the centrality dependence of elliptic flow in Pb-Pb at √s_NN = 5.5 TeV, the charged multiplicity at midrapidity (dN_ch/dy|_{y∼0}) must also be extrapolated.
The final particle multiplicity is calculated from the number of participants in the reaction (wounded nucleons) [58], which dominates the 'soft' component⁷ of the final spectra [59].
The model chosen in the present thesis is a saturation model for particle production in the soft pT region, extrapolated from lepton-proton collisions (see reference [58]).
Figure 1.11. (a) Number of produced particles per wounded-nucleon pair as a function of the number of wounded nucleons N_WN. The LHC prediction (upper band) is calculated from eq.1.19; for comparison, the fits of three different sets of RHIC data are also shown (√s_NN = 19.6, 130 and 200 GeV [58]). (b) Charged multiplicity per unit rapidity as a function of the impact parameter at LHC (Pb-Pb at √s_NN = 5.5 TeV). Values are calculated by inserting the number of wounded nucleons N_WN, obtained from the Glauber calculations (eq.1.6), into equation 1.19.
The main assumption of this approach is the geometric scaling of hadrons produced at small x_Bj observed in lepton-proton data at HERA. Over a wide range of Bjorken x and Q², the x dependence can be expressed through the saturation momentum Q²_sat(x), so that the data are described in terms of a single variable Q²/Q²_sat(x). By adding a nuclear dependence in the definition of the saturation momentum, Q²_sat,A ∝ A^α Q²_sat, the model works well in fitting RHIC and SPS data at different beam energies, and can be easily extrapolated to LHC [58].
The multiplicity of newly produced (charged) particles per participant, as a function of the collision energy √s_NN, incorporates the Q²_sat dependence in the Golec-Biernat and Wüsthoff (GBW) parameter λ [60]:

(1/N_part) (dN_ch^AA/dη)|_{η∼0} = N_0 (√s)^λ N_WN^{(1−δ)/(3δ)} , (1.19)

where δ = 0.79 ± 0.02 is the fit parameter, and N_0 = 0.47/2 is the overall normalization [58]. The GBW parameter (for R_0² = 1/Q²_sat = (x̄/x_0)^λ in GeV⁻², with x_0 = 3.04 · 10⁻⁴) is λ = 0.288 [60].

⁷The term 'soft' refers to the low-pT part of the spectra (pT ≲ 1 GeV/c), in contrast with the hard component, which refers to hard scattering processes leading to jets and high-pT observables. In heavy ion collisions, the soft component mainly consists of thermalized particles, the thermalization being a consequence of the multiple scattering in the medium.
Combining eq.1.19 with the number of participants from the above Glauber calculations, it is possible to estimate the impact parameter dependence of the charged multiplicity at midrapidity at LHC (see fig.1.11(b)).
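Eq.1.19 can be put to work numerically as below. The sketch assumes the reading N_0 = 0.47/2 with √s in GeV, which reproduces the ≈ 2000 charged particles per unit rapidity quoted for central events; the N_WN value is a hypothetical stand-in for the Glauber result of eq.1.6.

```python
# Saturation-model multiplicity, eq. 1.19, for Pb-Pb at sqrt(s_NN) = 5.5 TeV,
# assuming delta = 0.79, lambda = 0.288 and N0 = 0.47/2 (sqrt(s) in GeV).
N0, LAM, DELTA = 0.47 / 2.0, 0.288, 0.79
SQRT_S = 5500.0  # GeV

def dnch_deta(n_wn):
    """dN_ch/deta at mid-rapidity for a given number of wounded nucleons."""
    per_participant = N0 * SQRT_S**LAM * n_wn**((1.0 - DELTA) / (3.0 * DELTA))
    return per_participant * n_wn

mult_central = dnch_deta(416.0)  # most central Pb-Pb: N_WN ~ 2A
```

For the most central collisions this gives roughly 2000 charged particles per unit pseudorapidity, consistent with fig.1.11(b).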
Figure 1.12. Integrated elliptic flow ⟨v2⟩ versus charged multiplicity at mid-pseudorapidity (∼ event centrality), extrapolated to LHC with the LDL approximation (the most symmetric band) and with the hydrodynamic parametrization, with c_s² = 0.33 and 0.22 (the upper and lower band respectively). The uncertainties are calculated by propagating the 3σ_R + 3σ_w uncertainty (on radius and width) from the nuclear data [22] to the calculated eccentricity and dN_ch/dy. The uncertainty of the LDL extrapolation also includes the errors on the linear fit (see fig.1.4).
Figure 1.12 shows the centrality dependence of the integrated elliptic flow as a function of the charged multiplicity at midrapidity for the three extrapolations presented above: the most symmetric curve represents the linear extrapolation of the data in fig.1.4 (see sec.1.3.1), while the other two curves are the upper and lower limits of the relativistic hydrodynamic approach, with c_s² = 0.33 and 0.22 respectively (see sec.1.3.2).
The centrality classes and the exact values which have been used for the simulations are listed in tab.5.1 in the analysis chapter.
More recent developments suggest a slightly different extrapolation of v2 as a function of centrality, which better describes RHIC data [14]. The extrapolation is still based on relativistic hydrodynamics, but includes viscous deviations [61]. However, for reasons of time, this model has not been used in the present thesis.
1.3.4 Differential Flow

In the hydrodynamic picture, a detailed comparison between different equations of state is achieved by looking at v2 versus the transverse momentum, for different particle species, in the low-pT region. In particular, the effect of a phase transition would be less pronounced for lighter particles such as pions compared to protons [62]: at the same collective flow velocity, heavier particles carry a higher momentum and are therefore less affected by the thermal motion (see fig.1.13).
Figure 1.13. Transverse momentum dependence of v2 for protons and pions [63]. The lines represent hydrodynamic calculations, assuming an EoS with (full lines) and without (dashed lines) a phase transition.
Elliptic flow studies at RHIC show that, by scaling both v2 and pT by the number of constituent quarks n_q, a universal curve is observed [45, 64], suggesting that the partons are the relevant degrees of freedom, at least during the earliest stage of the system evolution, when most of the elliptic flow is built up. The results for the most common mesons and baryons are shown in fig.1.14.
In the simulations performed for the present thesis, the differential shape of v2 versus pT has been parametrized as linearly increasing with pT up to its saturation value at pT = 2 GeV/c, after which v2(pT) becomes flat [65]. The magnitude of the saturation v2 has been determined for each centrality class in such a way that the integrated ⟨v2⟩ (over the dN/dpT spectra of charged hadrons) matches the extrapolated values shown in fig.1.12 (see sec.5.3 for the details).
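The parametrization described above is simple to write down; a minimal sketch follows, where the saturation value v2_sat is a per-centrality-class input (the 0.2 used here is an arbitrary example, not a thesis value).

```python
import numpy as np

# v2(pT) parametrization used in the simulations: linear rise up to
# pT = 2 GeV/c, flat above; v2_sat is the per-centrality saturation value.
def v2_pt(pt, v2_sat, pt_sat=2.0):
    return v2_sat * np.minimum(pt, pt_sat) / pt_sat

pt = np.array([0.5, 1.0, 2.0, 4.0])
v2 = v2_pt(pt, v2_sat=0.2)   # -> [0.05, 0.1, 0.2, 0.2]
```

In the actual simulations v2_sat is tuned per centrality class so that the pT-integrated ⟨v2⟩ reproduces the extrapolations of fig.1.12.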
Elliptic flow versus (pseudo)rapidity is assumed to be flat, in agreement with what is normally used in hydrodynamic calculations (v2 shows a plateau at y ∼ η ∼ 0 in the interval |η| ≲ 1 [66, 67]) and with existing RHIC data [12, 68]. On the experimental side, this means that elliptic flow at η ∼ 0 is estimated by averaging the reconstructed v2(η) over a wide pseudorapidity interval, which in our case could be the entire acceptance of the ALICE central barrel detector (−0.9 < η < 0.9, see sec.2.1).

Figure 1.14. Elliptic flow v2 for identified particles scaled by the number of constituent quarks n_q, plotted versus p_T/n_q [45].
Since the aim of the present analysis is the measurement of the elliptic flow of unidentified charged particles, no particle type dependence of v2 has been included in the simulations.
1.4 Non-Flow Correlations

The elliptic flow observed in the final state arises from the anisotropic expansion of the system, which is due to the initial azimuthal asymmetry of the collision along the direction defined by the reaction plane. Therefore the coefficient v2 quantifies the correlation between the directions of the radiated particles and the orientation of the reaction plane.
However, the reaction plane is not directly observable in a real experiment (the experimental methods to estimate its direction and the magnitude of elliptic flow will be described in chapter 3); what is experimentally measurable is the 'event plane', which is reconstructed from the azimuthally anisotropic particle distribution (see sec.3.2).
The reconstructed event plane approximates the real reaction plane only if flow is the only source of azimuthal correlation. However, in real experiments, other physics phenomena can affect the spatial distribution of particle trajectories: due to jet emission, resonance decays and momentum conservation, particles are mutually correlated independently of the orientation of the reaction plane. These effects
are summarized under the concept of 'non-flow', defined as azimuthal correlations between k-tuples (i.e. pairs, triplets, ...) of radiated particles.
Depending on the analysis method, non-flow effects introduce a systematic error in the flow measurement, and non-flow contributions at LHC energies represent a large uncertainty in the flow analysis at ALICE.
In the present thesis, non-flow effects have been simulated using Hijing (see sec.2.2.3), and part of the study has been devoted to characterizing their sources and comparing their magnitude to the expected flow signal (thus defining the applicability limits of the event plane analysis). The details of this study and the analysis results are given in sec.4.1.
Chapter 2
Experimental Setup and Analysis Framework
ALICE (A Large Ion Collider Experiment [69]) is an experiment dedicated to the study of heavy ion collisions at the LHC (Large Hadron Collider [70]), located at CERN.
One of the main physics goals of ALICE (and the main subject of this thesis) is the measurement of 'anisotropic flow' in Pb-Pb collisions (and in particular elliptic flow, see section 1.1). Flow is a collective phenomenon classified as 'soft physics', since its observation requires the ability to reconstruct and identify particles down to very low momentum. Besides soft physics, the ALICE program will cover many other physics observables in heavy ion collisions, e.g. jets, heavy quarks, direct photons, HBT interferometry, etc. This led to the construction of a multipurpose detector combining different detection techniques.
In the first part of this chapter, the ALICE detector will be described, devoting more attention to the subdetectors directly involved in the flow measurement (see section 2.1). Since the LHC was not yet operational during the development of the present thesis, the analysis presented in the following chapters is entirely based on simulations. Therefore, the second part of this chapter will describe the simulation and analysis framework that has been used (section 2.2). The last section of this chapter describes the procedure for track reconstruction and particle identification implemented in the ALICE software framework, a prototype of the one that will be used during the real experiment (see section 2.3). The final output of the reconstruction algorithm is a data structure (the ALICE Event Summary Data) that constitutes the starting point of the flow analysis, as will be described in chapter 3.
2.1 The ALICE detector at LHC The heavy ion program at LHC, which is supposed to start after the first pp run, will collide the largest available nuclei at the highest possible energy (PbPb collision at√ sNN ≃ 5.5 TeV), and also explore different systems (pA, AA) at different beam
energies. The nominal luminosity of the LHC for Pb-Pb collisions is 1 mb⁻¹ s⁻¹, i.e. an event rate of about 8000 minimum bias collisions per second. On average, 5% of them will correspond to the most central events, with a multiplicity of about 2000 charged particles per unit rapidity (see sec.1.3).
Figure 2.1. General layout of the ALICE detector [71]. For visibility, the HMPID detector is drawn at the 12 o'clock position instead of the 2 o'clock position where it will actually be. For the meaning of the abbreviations, refer to the text.
This low interaction rate, together with the high multiplicity environment, led to the design of slow but highly granular tracking detectors. The soft physics domain requires a wide-acceptance tracking device with low material density, immersed in a moderate magnetic field. In addition, particle identification over a wide momentum range is required, which implies the implementation of many different identification techniques (energy loss, time of flight, transition radiation, and Cherenkov light).
ALICE is a general purpose detector to measure and identify hadrons, leptons and photons produced in the interaction, from very low to very high transverse momentum (100 MeV/c < pT < 100 GeV/c). It consists of a central detector system, designed to provide full tracking at midrapidity (−0.9 < η < 0.9) over the full azimuth, and several forward detectors.
The experimental setup of ALICE is extensively described in the ALICE Technical Proposal [72], its addenda [73, 74], and the ALICE Physics Performance Report [71]. The detector systems are described in the various Technical Design Reports (TDRs [75–87]). The Trigger System is described in [88]; the Data Acquisition System in the ALICE-DAQ manual [89].
Tracking and particle identification in the central rapidity region rely on four separate layers of 2π-coverage detectors (ITS, TPC, TRD and TOF) immersed in a uniform magnetic field parallel to the beam axis. The ALICE experiment is designed to run with three possible configurations of the magnetic field, B = 0.2, 0.4 and 0.5 T (value of B at the center of the ALICE solenoid). The magnitude of the magnetic field affects the transverse momentum acceptance: a stronger magnetic field gives a better resolution at high pT but worsens the efficiency at low pT. The current default value of the magnetic field is B = 0.4 T 1.
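This trade-off follows from the bending of charged tracks in the solenoidal field. A minimal sketch, for illustration only, using the standard relation pT [GeV/c] = 0.3 |z| B [T] R [m] between transverse momentum and bending radius:

```python
def radius_from_pt(pt_gevc, b_tesla, charge=1):
    # Bending radius R [m] of a charged track in a solenoidal field:
    # pT [GeV/c] = 0.3 * |z| * B [T] * R [m]
    return pt_gevc / (0.3 * abs(charge) * b_tesla)

# A stronger field curls low-pT tracks more tightly (hurting their efficiency)
# while bending high-pT tracks enough to measure their sagitta:
r_low = radius_from_pt(0.1, 0.5)     # ~0.67 m: a 100 MeV/c track curls up early
r_high = radius_from_pt(100.0, 0.5)  # ~667 m: a 100 GeV/c track is almost straight
```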
The detector arrangement (and in particular the Inner Tracking System, see sec. 2.1.1) provides high granularity close to the interaction point, to reconstruct short-lived resonances and B and D mesons. The magnetic field is generated by the large solenoidal L3 magnet which contains the experiment (see fig. 2.1).
The central system is complemented by a high momentum particle identification detector (HMPID [80]), a high resolution array of ring-imaging Cherenkov detectors (located at |η| < 0.6, with an acceptance of 57.6° in azimuth). Photons are reconstructed in a high density crystal photon spectrometer (PHOS [81]), which covers a small η slice (|η| < 0.12) at midrapidity and 100° in φ. A future upgrade of the experiment foresees an electromagnetic calorimeter (EMCAL) to be installed over 100° in azimuth in the central rapidity region, to help the identification of charged leptons and photons.
Muon detection is performed by a forward spectrometer, which covers a high pseudorapidity cone (−4.0 < η < −2.4) on the negative z side 2 of the central detector (MUON [83]). The muon spectrometer is equipped with an absorber for filtering out hadrons and photons from the interaction, a dipole magnet, and two separate arrays of tracking chambers, before and after the dipole magnet, for the muon momentum measurement.
To complement the central detection system, other detectors are used to characterize the centrality of the events: silicon strip forward multiplicity detectors (FMD [87]) for measuring the charged particle multiplicity, and a preshower photon multiplicity detector (PMD [85]) for measuring the multiplicity and spatial distribution of photons on an event-by-event basis. They are located on the two opposite sides of the interaction point.
The fast trigger signal is provided by an array of scintillator and quartz counters close to the interaction point: the V0 and T0 detectors [88]. The T0 detector, with two arrays of Cherenkov counters placed on both sides of the interaction point, is particularly important because, thanks to its fast response, it provides the start signal for the other detectors.
1This is the default setting of the release v404Rev14 of AliRoot (the one in use for the PDC06 production, including the simulations presented in chapter 5).
2 In the laboratory frame, the z axis is defined by the direction of the beam pipe (see also sec. 2.3).
About 100 meters away from the collision point, a Zero-Degree Calorimeter (ZDC [82]) uses both hadronic and electromagnetic showers to measure the energy carried away by non-interacting nucleons (spectators 3). The ZDC consists of two distinct quartz fiber calorimeters: one for spectator neutrons, placed at zero degrees relative to the z axis, and one for spectator protons, placed externally to the beam pipe on the side where positive particles are deflected. In an ideal case, dividing the collected energy by the average energy per nucleon at LHC (i.e. 2.76 TeV per nucleon in a 208Pb beam), it would be possible to immediately estimate the centrality of the collision. In the real experiment not all the spectator nucleons can be detected.
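The ideal-case centrality estimate described above, combined with footnote 3, amounts to a one-line calculation; a minimal sketch (function names are ours, for illustration):

```python
A_PB = 208              # mass number of the colliding Pb nucleus
E_NUCLEON_TEV = 2.76    # beam energy per nucleon in a 208Pb beam, TeV

def n_spectators(e_zdc_tev):
    # Ideal case: all spectator energy is collected by the ZDC
    return e_zdc_tev / E_NUCLEON_TEV

def n_participants(e_zdc_tev):
    # Footnote 3: Npart = A - Nspec
    return A_PB - n_spectators(e_zdc_tev)

# 276 TeV collected by the ZDC -> 100 spectators, 108 participants:
print(round(n_participants(276.0)))  # -> 108
```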
The elliptic flow measurement described in this thesis requires full tracking over 2π of azimuthal coverage, which is the domain of the central barrel detector system. In the following, the four main components of the central system will be briefly described; their combined track reconstruction will be presented in section 2.3.
2.1.1 ITS

From the interaction point, the first component of the ALICE detector is the Inner Tracking System (ITS [75]): six concentric layers of silicon detectors with a design based on three different silicon techniques (fig. 2.2).
Figure 2.2. Layout of the ITS detectors, showing the spatial arrangement of the layers.
The position and segmentation are optimized for efficient track finding and for a high spatial resolution in a high multiplicity environment.
The high particle density (80 particles/cm² at 4 cm from the interaction point) and the requirements on spatial resolution are the main reasons for choosing a Silicon Pixel Detector (SPD) for the innermost two layers. The following two layers are Silicon Drift Detectors (SDD), and where the track density becomes lower than one particle per cm² (> 40 cm from the interaction point) there are two layers of double-sided Silicon Strip Detectors (SSD). Both the SDD and the SSD layers have an analog readout for dE/dx measurement, which allows low-pT particle identification using the Bethe-Bloch model for energy loss.

3 From the number of spectators Nspec, the number of participating nucleons can be calculated as Npart = A − Nspec, where A is the mass number of the colliding nucleus.
Table 2.1. Essential details of the ITS detectors.

Detector  Layer  r (cm)  ±z (cm)  |η|    σrφ (µm)  σz (µm)  Channels
SPD       1      4.0     14.1     1.98   12        100      3,278,400
SPD       2      7.2     14.1     0.9    12        100      6,556,800
SDD       3      15.0    22.2     0.9    38        28       43,008
SDD       4      23.9    29.7     0.9    38        28       90,112
SSD       5      38.5    43.2     0.9    20        830      1,148,928
SSD       6      43.6    48.9     0.9    20        830      1,459,200
The ITS has a pseudorapidity acceptance of |η| < 0.9 for all vertices located within ±5.3 cm from the beam intersection. The first layer of the SPD has a larger pseudorapidity coverage (|η| < 1.98), so that this layer, together with the Forward Multiplicity Detectors (FMD) 4, provides a continuous coverage in rapidity for the measurement of the charged-particle multiplicity. Information about the ITS is summarized in tab. 2.1.
The material budget of the ITS has been kept as low as possible (X/X0 ≃ 7% for perpendicular tracks) in order to maximize the efficiency at low momentum: a thick layer of material very close to the interaction point would act as a shield, preventing low-momentum tracks from entering the TPC.
2.1.2 TPC

The ITS is surrounded by the large cylindrical volume of the Time-Projection Chamber (TPC [76]), a conventional device in heavy ion experiments, already successfully used by NA49 and STAR.
The field cage has a total volume of about 88 m³ (making it the largest Time Projection Chamber ever built), and the detector is optimized for an extremely high multiplicity environment, safely overestimated as about 8000 tracks per unit rapidity (∼ 20,000 tracks in the whole TPC coverage [71]). The detector can sustain a rate of 400 minimum bias Pb-Pb collisions per second (400 Hz), and up to 1 kHz for pp [71].
The TPC is the main tracking device in ALICE; track seeds start at the outer radius of the TPC (see sec. 2.3). It has 2π azimuthal coverage and an acceptance of |η| < 0.9 for full radial tracking 5. For partial tracking (tracks not reaching the outer radius of the TPC) an acceptance up to |η| ∼ 1.5 is accessible.

4 The Forward Multiplicity Detectors measure the charged-particle multiplicity in the pseudorapidity ranges −3.4 < η < −1.7 and 1.7 < η < 5.1.

Figure 2.3. Layout of the TPC, showing the orientation of the electric field E toward the central membrane (e− drift to the endcaps; maximum drift time 88 µs, overall length 510 cm).
The ALICE TPC is an ideal device for soft physics observables; the momentum resolution is estimated to be between 1% and 2% for low-momentum tracks (100 MeV/c < pT < 1 GeV/c), depending on the magnetic field (see sec. 5.1).
The material budget of the TPC is also kept low, to minimize multiple scattering and secondary particle production. Both the field cage and the drift gas are made of materials with a large radiation length; the material budget of the TPC is 3.5% < X/X0 < 5% for tracks in the central rapidity acceptance (|η| < 0.9).
The field cage has a central high-voltage electrode that divides the TPC volume into two halves, and two opposite sets of axial potential dividers (18 field degraders, 1 per sector) to create a uniform electric field in both halves (fig. 2.3).
The readout chambers are located on the two endcaps of the TPC cylinder; they are standard multiwire proportional planes with cathode pad readout, segmented into 18 sectors in φ with 2 readout chambers per sector (inner chamber 84.1 < r < 132.1 cm, outer chamber 134.6 < r < 246.6 cm). In total there are 18 × 2 × 2 = 72 readout chambers, with a total of 159 radial pad rows.

5 The TPC is a cylindrical volume with outer radius r ≃ 2.47 m and total length lz = 5 m along z, centered around the beam-crossing point (0, 0, 0). Neglecting the displacement of the vertex in the transverse plane (which is of the order of a few tens of µm), the η acceptance of the TPC is given by η(z0) = −log(tan(θ(z0)/2)), where θ(z0) = tan⁻¹(r/(lz/2 − z0)) is the polar angle under which the edge of the TPC is seen from the interaction point. For events with the main vertex at the center of the cylinder, the TPC has a symmetric acceptance |η| ≲ 0.891.

Table 2.2. Synopsis of TPC parameters.

Pseudorapidity coverage: −0.9 < η < 0.9 for full radial track length; −1.5 < η < 1.5 for 1/3 radial track length
Azimuthal coverage: 2π
Radial position (active volume): 845 < r < 2466 mm
Length (active volume): 5000 mm
Segmentation: 18 (φ), 2 (r), 2 (z)
Pad rows: 159 (63 inner + 96 outer)
Material budget: X/X0 = 3.5 to 5% for 0 < |η| < 0.9
Detector gas: 88 m³ of Ne/CO2 (90%/10%)
Drift length: 2 × 2500 mm
Drift field: 400 V/cm
Drift velocity, time: v = 2.84 cm/µs, tmax = 88 µs
Position resolution (σ) in rφ: 1100 to 800 µm (inner to outer radii)
Position resolution (σ) in z: 1250 to 1100 µm
dE/dx resolution (isolated tracks): 5.5%
dE/dx resolution (dN/dy = 8000): 6.9%
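The acceptance limit |η| ≃ 0.891 for full radial tracking can be checked numerically; a minimal sketch, assuming the active-volume dimensions from tab. 2.2 (outer radius ≈ 2.466 m, half-length 2.5 m):

```python
import math

R_TPC = 2.466    # outer radius of the TPC active volume, m
HALF_L = 2.5     # half-length of the TPC along z, m

def eta_acceptance(z0=0.0, r=R_TPC, half_length=HALF_L):
    # Polar angle under which the outer edge of the endcap is seen from a
    # vertex at z0, then eta = -ln(tan(theta/2))
    theta = math.atan2(r, half_length - z0)
    return -math.log(math.tan(theta / 2.0))

print(round(eta_acceptance(), 3))   # -> 0.891 for a vertex at the center
```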
The inactive areas between neighboring inner chambers are aligned with those between neighboring outer chambers. This optimizes the momentum precision for high momentum tracks, but has the drawback of creating dead zones in the azimuthal acceptance 6 (the detector is non-sensitive for about 10% in φ).

The analog readout of the TPC allows particle identification by dE/dx measurement, both in the low momentum region, where the expected ionization of the different particle species is well separated, and at very high pT, thanks to the relativistic rise of the Bethe-Bloch curve (see sec. 2.3). Information about the TPC is summarized in tab. 2.2.

6 Each sector of the TPC covers ∼ 18° in φ, with a gap of ∼ 2° between two neighboring sectors. This results in ∼ 324° of azimuthal coverage and ∼ 36° of dead area, located at φ = n × 18° ± 1°.
2.1.3 TRD and TOF

The Transition-Radiation Detector (TRD [77]) is located around the TPC (fig. 2.4(a)). It provides electron identification in the central barrel for momenta greater than 1 GeV/c by detecting the transition radiation (TR) produced by those particles in the radiator, i.e. the radiation emitted by fast particles (with relativistic γ > 1000) when crossing a boundary between two materials with different dielectric constants.
In the momentum range from 1 to 10 GeV/c, only electrons (and positrons) are that highly relativistic, due to their small mass. The TR photons cause a larger release of energy in the detector material, which makes it possible to separate pions from electrons with a misidentification probability of less than 1%.
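The reason only electrons radiate in this momentum range is purely kinematic; a quick check of the Lorentz factors (particle masses are the standard PDG values):

```python
import math

M_E = 0.000511    # electron mass, GeV/c^2
M_PI = 0.13957    # charged-pion mass, GeV/c^2

def gamma(p_gevc, mass):
    # Lorentz factor: gamma = E/m = sqrt(p^2 + m^2) / m
    return math.sqrt(p_gevc ** 2 + mass ** 2) / mass

# At p = 1 GeV/c only the electron is above the gamma ~ 1000 threshold
# for producing transition radiation:
print(round(gamma(1.0, M_E)))    # -> 1957
print(round(gamma(1.0, M_PI)))   # -> 7
```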
The TRD fills the radial space between the TPC and the TOF detector; it also has 2π azimuthal coverage and a pseudorapidity acceptance |η| < 0.9. The TRD consists of 6 individual layers, divided into 18 sectors to match the azimuthal segmentation of the TPC, and 5 segments along z. In total there are 18 × 5 × 6 = 540 detector modules, each made of a sandwich radiator and a multiwire proportional readout chamber.
Figure 2.4. (a) Cut through the TRD with the TPC inside. (b) TOF sector (supermodule), consisting of five modules inside the space frame which surrounds the TRD.
The last layer with 2π azimuthal coverage in the central barrel is the Time-Of-Flight detector (TOF [78]). Its cylindrical surface covers the central pseudorapidity region (|η| ≤ 0.9) and provides particle identification in the intermediate momentum range (0.2 < pT < 2.5 GeV/c, see sec. 2.3).
The time of flight of detected particles is measured as the delay between the fast trigger signal given by the T0 detector (minus a fixed tT0 = zT0/c) and the TOF signal. This allows particle identification by calculating the invariant mass with the relativistic formula:

m = p_tot / (βγ) ,   (2.1)

where γ = 1/√(1 − β²) is the relativistic factor, and β is calculated from the TOF signal as β = l_trk / (c · t_TOF) (l_trk is the length of the track, calculated from the track fit, and c is the speed of light). The invariant mass obtained in this way is used to compute the probability of the particle to be of a specific type.
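A minimal numerical sketch of this mass calculation (the choice of units here, GeV/c, metres and nanoseconds, is ours):

```python
import math

C = 0.299792458   # speed of light, m/ns

def tof_mass(p_gevc, track_length_m, tof_ns):
    # beta = l_trk / (c * t_TOF), then m = p / (beta * gamma),
    # which is equivalent to m = p * sqrt(1/beta^2 - 1)
    beta = track_length_m / (C * tof_ns)
    return p_gevc * math.sqrt(1.0 / beta ** 2 - 1.0)

# A particle with p = 1 GeV/c covering 3.7 m in 13.76 ns has roughly
# the charged-kaon mass (~0.494 GeV/c^2):
print(round(tof_mass(1.0, 3.7, 13.76), 2))  # -> 0.49
```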
The modular structure of the TOF corresponds to 18 sectors in φ (matching the TPC and TRD) and 5 segments in z (fig. 2.4(b)). Each TOF module is a Multigap Resistive-Plate Chamber (MRPC [90]), which can operate efficiently in extreme multiplicity conditions.
The electric field is high and uniform over the whole gas volume of the detector: any ionization produced by a charged particle passing through immediately starts an avalanche process, which eventually generates the observed signals on the pickup electrodes. There is no drift time associated with the movement of the electrons to a region of high electric field; therefore the time uncertainty of these devices is caused only by the fluctuations in the growth of the avalanche.
2.2 The Off-Line Framework

Since the analyses presented in this thesis are entirely based on simulated data, the following section describes the simulation and analysis framework in use at ALICE. The ALICE Off-line framework, AliRoot, is a full experimental environment built on top of ROOT.
2.2.1 ROOT

ROOT [91] is a widely adopted software framework for experimental high-energy physics that offers a common set of features and tools for many domains: generation of events, detector simulation, data reconstruction, data storage, analysis and visualization.
It was initially developed in the context of a heavy ion experiment (NA49 at CERN [92]) in 1995 [93], following the then-new standards of Object-Oriented programming. The ROOT framework rapidly took over from the older, still very popular FORTRAN tools, and has become an essential piece of software in experimental particle physics.
Thanks to the object-oriented approach, the system can easily be extended to other domains, e.g. interfaces for remote or distributed analysis (see sec. 2.2.4), or the implementation of user-defined macros and libraries (the AliFlow package is a good example, see section 3.3).
The built-in C++ interpreter (CINT [94]) provides the possibility to use both C++ macros and compiled 'shared object' libraries 7. ROOT is in fact a versatile system that can be dynamically extended.
In the ALICE collaboration, ROOT has been adopted as the underlying system for data acquisition, simulation and analysis.
2.2.2 AliRoot and the ALICE Offline Project

Many collaborations have developed their own ROOT-based tools to better satisfy the specific needs of their experiments. The STAR collaboration is an example of this approach, with the implementation of the Star Class Libraries (SCL [96]). A more radical strategy has been adopted by the ALICE collaboration, giving birth to a complete experimental framework named AliRoot.
Brief History of AliRoot
A Geant3-based simulation program (gAlice [97]) was originally developed for the Technical Proposal of the ALICE experiment at LHC [72]. It was an early prototype for simulation and data reconstruction, mainly written in FORTRAN and built on top of existing Monte Carlo codes (such as GEANT [98,99] and FLUKA [100]).
After the publication of the Technical Proposal (TP [72]) in 1995, simulations became an essential tool for the detailed design of the detectors and for the development of the Technical Design Reports (TDR [75-88]) for the various ALICE subdetectors. It became clear that a substantial upgrade of the gAlice package was necessary. A second version of gAlice was quickly prototyped, still using the 'Geant3' simulation program (in FORTRAN) but completely wrapped into a C++ class. This rapid prototyping was possible thanks to the availability of ROOT as a framework and to the active support of the ROOT team. The result of this activity was a suitable tool for simulations, which combined the advantages of Object-Oriented programming with the robustness of the ROOT framework; the output of the simulations were persistent objects that could be stored on disk.
ROOT was officially adopted by the ALICE Offline Project in November 1998. As a consequence, new C++ versions of the simulation programs started to be developed, together with the digitization and reconstruction code, now based on ROOT as a common framework. Since version 3, the name 'AliRoot' has been adopted and the simulation and reconstruction code has been completely rewritten in C++.
The version of AliRoot that has been used in the present thesis is the release v404Rev14. The entire framework is constantly under development [101].
7A ‘shared object’ library (with extension ‘.so’) is the standard format of dynamically linked libraries on the Linux platform, usually compiled with ‘gcc’ [95].
The AliRoot Framework
AliRoot is a complete experimental framework to simulate, reconstruct and analyze heavy ion data in the ALICE environment.
Heavy ion collisions are simulated using a Monte Carlo event generator (see sec. 2.2.3). Using the transport code from Geant [99], the generated particles are propagated through the detector response simulation packages and transformed into digitized signals that match the layout and format of the real detector output.
The result of this process is the production of 'raw data', i.e. data representing the digitized output of the ALICE detector, which can be submitted to the event reconstruction chain. Simulated data are then processed in the same way as data from the real experiment: the tracking algorithm fits the reconstructed space points (clusters) in each detector and calculates the particle trajectory. Analog detector signals are also associated with the fitted tracks, and the energy loss and Time-Of-Flight signals are used to calculate the Bayesian weights for particle identification (see sec. 2.3.2).
A useful feature of the transport code, as implemented in AliRoot, is that it keeps track of which simulated particle produced a signal in the sensitive volume of each detector, by associating the particle's label to every 'hit' produced in the detector. At the end of the simulation, the reconstructed tracks can be compared one-to-one with the original particles that have been simulated; this is very useful for calculating the reconstruction efficiency and for optimizing the analysis cuts.
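As an illustration of how this label bookkeeping can be used, a simplified stand-in (not the actual AliRoot machinery) for an efficiency calculation:

```python
def reconstruction_efficiency(generated_labels, reconstructed_labels):
    """Fraction of generated particles matched by a reconstructed track.

    Both arguments are collections of Monte Carlo labels: each generated
    particle carries a unique label, and each reconstructed track stores
    the label of the particle that produced its hits.
    """
    generated = set(generated_labels)
    matched = generated & set(reconstructed_labels)
    return len(matched) / len(generated)

# e.g. 4 generated particles, 3 of them matched by a reconstructed track:
print(reconstruction_efficiency([0, 1, 2, 3], [2, 0, 3]))  # -> 0.75
```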
The simulation process can be summarized in the following steps:
• Event generation: The collision is simulated by an event generator, which produces an array of final-state particles with outgoing momenta, propagating from the main vertex of the collision (which can be set at any position along the beam intersection, see sec. 5.1.4). This array is called 'KineTree', and it is a ROOT TTree structure containing a list of TParticles 8 with their complete kinematics (see sec. 2.2.3).
• Particle transport: Particles emerging from the interaction are propagated along the direction of their momenta, and the transport code (Geant [99]) simulates the interactions with the detector material (particle decays, particle scattering, ionization processes and energy deposition) by calculating the probability of random microscopic processes between the particle and its surroundings. Whenever a secondary particle is produced, it is added to the KineTree and transported as well; the transport code stops when a particle exits the detector volume or when a low energy threshold is reached (the particle stops). During the transport process, the information contained in the TParticle is lost, reduced to the signals generated by a particle crossing the detector.
8 The ROOT TParticle class is meant to summarize the essential information of a physical particle, such as momentum, charge, mass and particle type. For more information see the online ROOT documentation [91].
• Detector response: The energy deposited in the detector is then translated into a detector response (a 'hit'), according to the geometry of the detector and the implemented detection techniques (this is the ideal detector response).
• Digitization: The detector response is digitized and formatted according to the output of the front-end electronics and the data acquisition system (DAQ [89]); some smearing of the signal due to electronic noise is applied at this step. The resulting data closely resemble the real output that will be produced by the detector.
• Event reconstruction: The reconstruction algorithm fits the reconstructed space points to produce track candidates (AliESDtracks) and retrieves or calculates all the relevant information available (fit parameters, energy loss, PID hypotheses); it also determines the interaction vertex and reconstructs neutral decay vertices (see sec. 2.3). Each reconstructed event is stored as an ALICE Event Summary Data object (class AliESD).
The whole procedure is handled by AliRoot and can be executed at any time using a few simple commands and a configuration script (to specify the event generator, detector settings and reconstruction parameters). However, a full simulation requires a few hours of computing time, depending on the particle multiplicity and the number of detectors switched on.
2.2.3 Event Generators

Since a full and complete description of the processes occurring in heavy ion collisions has not been achieved yet, AliRoot incorporates several Monte Carlo event generators, specifically implemented to simulate different physics observables.
The analysis described in this thesis made use of two different event generators (both available in the standard release of AliRoot), 'Hijing' and 'GeVSim'. They will be briefly described in the following two subsections.
Hijing

Hijing (Heavy Ion Jet INteraction Generator [102-104]) is a multipurpose heavy ion event generator implemented in FORTRAN and wrapped into a C++ class, so that it can easily be incorporated in the AliRoot framework. Hijing offers a very good description of jet production in nucleus-nucleus collisions, incorporating all known physics effects arising from a superposition of multiple proton-proton collisions, plus some parametrizations of soft physics observables. Its implementation is based on a perturbative-QCD-inspired model, where multiple minijet production is combined with a Lund-type model for jet fragmentation [105].
In high-energy nuclear interactions, and especially in relativistic heavy ion collisions, the multitude of hard or semi-hard parton scatterings results in the production of an enormous number of jets, which can be described in terms of perturbative QCD (pQCD). Minijets are expected to dominate the transverse energy production in the central rapidity region.
In Hijing, multiple interactions are calculated using Glauber geometry, and a parametrization of the parton distribution function of the nucleus is used to take into account parton shadowing. Jet quenching is modeled using a parametrized energy loss dE/dz of partons traversing the dense medium. The program uses subroutines of PYTHIA [106] to generate the kinematic variables of each hard scattering process and the associated radiation, and JETSET [107] for string fragmentation. Due to its implementation in terms of pQCD, Hijing is only valid for collisions with center of mass energy √sNN above 4 GeV per nucleon, which makes it perfectly suitable for LHC collisions.
However, being a superposition of many pp collisions, Hijing events do not contain any collective effect such as anisotropic flow, while other typical heavy ion observables are added by ad-hoc routines (e.g. the jet-quenching effect [108]). Another disadvantage of Hijing is the particle multiplicity, which is too large with respect to the current predictions for LHC (this problem can be dealt with by rescaling the centrality of the collisions, as done in sec. 4.3).
In the present thesis, Hijing has been used to simulate the background of the flow measurement (i.e. nonflow effects) arising from the presence of jet-like correlations and resonance decays (see sec. 4.1). In sec. 4.3 and 5.4, collective flow has been added on top of the Hijing simulations by boosting the generated events with the flow AfterBurner (see below).
GeVSim and the flow AfterBurner
GeVSim [109,110] is a fast and easy-to-use Monte Carlo event generator, based on the MeVSim [111] event generator developed for the STAR experiment (written in FORTRAN), and reimplemented in C++ for AliRoot.
It does not reproduce the physics of the heavy ion reaction, but simply radiates user-defined particle types out of the primary vertex, with a custom momentum spectrum parametrized in pT, η and φ. The dN/dpT and dN/dη distributions can be expressed analytically or with user-defined histograms, while the azimuthal distribution is described by two Fourier coefficients v1 and v2 (representing directed and elliptic flow, see eq. 1.1), which can be expressed as functions of pT and η.
At the present time, GeVSim offers the simplest way to parametrize anisotropic flow in heavy ion events, simply by introducing a modulation in the generated dN/dφ distribution with respect to the reaction plane angle. The Fourier expansion of the azimuthal distribution implemented in GeVSim is truncated at the second coefficient, therefore the azimuthal anisotropy is parametrized as:

E d³N/dp³ = (1/2π) · d²N/(pT dpT dy) · [1 + V1(pT, y) cos(φ − Ψ) + V2(pT, y) cos(2[φ − Ψ])] ,   (2.2)

where φ is the azimuthal angle of the particles, Ψ is the reaction plane angle, and Vn(pT, y) (n = 1, 2) are the first and second Fourier coefficients.
The Fourier coefficients can be set separately for each particle type, and they can be constants or functions of pT or η. In particular, the parametrization used in this thesis is:
V1(pT, η) = 0 ,
V2(pT, η) = v2^sat · pT / pT^sat   if pT < pT^sat ,
V2(pT, η) = v2^sat                 if pT ≥ pT^sat ,   (2.3)
with v2 (and therefore v2^sat) assigned according to the centrality of the event (see sec. 1.3) and pT^sat = 2 GeV/c. The event plane angle is generated with random orientation (as it will be in real collisions).
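Eq. 2.3 translates directly into code; a minimal sketch, with the saturation value v2_sat left as the centrality-dependent input:

```python
PT_SAT = 2.0   # saturation momentum pT^sat, GeV/c (eq. 2.3)

def v2_param(pt, v2_sat):
    # Linear rise up to pT^sat, constant plateau above; v2_sat is fixed
    # by the centrality of the event (see sec. 1.3)
    if pt < PT_SAT:
        return v2_sat * pt / PT_SAT
    return v2_sat

print(v2_param(1.0, 0.10))  # -> 0.05
print(v2_param(3.0, 0.10))  # -> 0.1
```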
Events can also be produced with any other event generator and then boosted with the GeVSim 'AfterBurner', which adds flow on top of an existing array of final-state particles. The AfterBurner is applied to an existing KineTree, and it distorts the dN/dφ distribution according to the specified values of v1 and v2, with respect to an event plane angle that must be specified on an event-by-event basis.
In sec. 4.3 and 5.4, Hijing simulated events have been boosted with the flow AfterBurner, in order to obtain 'realistic' heavy ion events with both collective flow and jet-like azimuthal correlations. However, the boost is applied on top of the Hijing simulated event, where jet fragmentation and strong resonance decays have already taken place (weak and electromagnetic processes are instead performed at a later stage by the transport code); therefore part of the nonflow effects is probably washed out by our procedure.
The AfterBurner is fed with the same reaction plane generated by Hijing, which is distributed randomly over 2π in azimuth. The magnitude of v2 is calculated as a function of the impact parameter of the collision (after the proper rescaling) according to the hydro parametrization (see sec. 1.3).
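The effect of such a boost can be sketched with a first-order azimuthal shift: moving each particle by Δφ = −v2 sin(2(φ − Ψ)) imprints, to first order in v2, an elliptic anisotropy with ⟨cos 2(φ − Ψ)⟩ = v2 on an initially uniform sample (the actual GeVSim AfterBurner implementation may differ in detail):

```python
import math, random

def boost_phi(phi, psi, v2):
    # First-order azimuthal shift: to O(v2) this imprints the modulation
    # dN/dphi ~ 1 + 2*v2*cos(2(phi - psi)) in the standard vn convention
    return phi - v2 * math.sin(2.0 * (phi - psi))

random.seed(1)
psi = random.uniform(0.0, 2.0 * math.pi)   # random event plane orientation
phis = [boost_phi(random.uniform(0.0, 2.0 * math.pi), psi, 0.05)
        for _ in range(200000)]

# Recover the imprinted elliptic anisotropy from the boosted sample:
v2_obs = sum(math.cos(2.0 * (p - psi)) for p in phis) / len(phis)
print(round(v2_obs, 3))   # close to the input v2 = 0.05
```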
2.2.4 AliEn and LCG

The large amount of data that is going to be produced by ALICE (and more generally by the LHC experiments) requires very large storage and computing power. One month of Pb-Pb collisions in ALICE will produce roughly 1 PByte of data (1 PetaByte = 1,000,000 GigaBytes). The construction of the LHC therefore required the parallel implementation of a computing infrastructure capable of dealing with such a huge amount of data.
The LCG (LHC Computing Grid [112]) is a network-based framework for distributing jobs and data over the resources available worldwide (both CPUs and storage elements).
The ALICE Off-line collaboration has developed its own way to access this grid, the ALICE Environment ('AliEn' [113,114]). Massive event simulations (e.g. the Particle Data Challenges, PDCxx [101]) are currently produced through this environment, and during the real experiment the grid will provide the computing power for raw-data reconstruction and distributed analysis. AliEn provides a virtual file catalogue (to access distributed datasets) and various web services such as user authentication, job execution, file transport and performance monitoring [115].
During the development of the present thesis, the LCG grid has been used to produce the simulations presented in chapter 5. Some effort has also been devoted to interfacing the flow analysis package with the AliEn environment, through the implementation of an AliFlowTask for the creation of AliFlowEvents from AliESDs and their subsequent analysis (see sec. 3.3). In this way the job can be submitted to the grid through a ROOT task manager (the AliTaskManager) for distributed analysis.
2.3 Track Reconstruction in the Central Barrel Detectors
The central barrel detector system of ALICE mainly consists of tracking devices: charged particles passing through them leave discrete signals ('clusters') at the space points they cross, and a reconstruction algorithm fits these space points into track candidates to reconstruct the particle kinematics. This operation is called track reconstruction, or tracking.
The combined track reconstruction in the central barrel system collects information from the different subdetectors in order to optimize the track reconstruction performance (the details of the tracking procedure are described in chapter 5 of the ALICE Physics Performance Report [25]).
Reconstructed space points are represented in the global coordinate system of ALICE 9, with the z axis along the beam pipe (oriented in the opposite direction with respect to the muon arm), the y axis pointing upward, and the x axis completing a right-handed cartesian system (it points outward with respect to the LHC ring). The origin is defined by the intersection of the z axis with the central membrane plane of the TPC.
The track fitting algorithm uses the Kalman filter [116,117], a general and powerful method for local track-finding. Tracks are approximated with a 'helix' 10 and parametrized by a set of five parameters, such as the curvature and the angles with respect to the coordinate axes. The Kalman filter performs an iterative fit, adding the space points found along the trajectory of the helix. The fit parameters are updated at each additional fit point (after some rejection criteria), improving the quality of the fit at every step. The method is suitable for simultaneous track recognition and fitting, and makes it possible to reject incorrect space points 'on the fly'. Moreover, the Kalman filter offers a natural way to extrapolate tracks from one detector to another (e.g. from the TPC to the ITS or the TRD). The reconstruction algorithm is fully integrated within the AliRoot framework, and uses the same detector classes involved in the simulation [101].

9 Note: this is the global coordinate system of the detector. On an event-by-event basis, the origin of the coordinate system (to which track and V0 coordinates refer) is located at the reconstructed position of the main vertex.

10 The helix perfectly describes the ideal trajectory of a charged particle moving in a uniform magnetic field, where the Lorentz force acts perpendicularly to the direction of motion.
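The update step of the filter can be illustrated in a toy one-dimensional setting (the real tracker updates a five-parameter helix state and its covariance matrix, not a scalar):

```python
def kalman_update(x, P, z, R):
    # One Kalman update step in a toy 1-D setting: x, P are the current
    # state estimate and its variance; z, R are the new measurement and
    # its variance. The gain K weighs the new point against the running fit.
    K = P / (P + R)
    x_new = x + K * (z - x)       # pull the estimate toward the new point
    P_new = (1.0 - K) * P         # the uncertainty shrinks at every step
    return x_new, P_new

# Adding space points one by one refines the fit, as in the track following:
x, P = 0.0, 1.0e6                     # vague initial seed
for z in [1.02, 0.98, 1.01, 0.99]:    # noisy measurements of a true value 1.0
    x, P = kalman_update(x, P, z, R=0.01 ** 2)
print(round(x, 2))   # -> 1.0
```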
Track reconstruction is done in three passes:
1st) track finding and fitting inward from the TPC to the ITS: Tracking starts in the outermost pad rows of the TPC, where the spatial separation between tracks is largest. Each track seed is calculated from different combinations of pad rows, with and without a primary vertex constraint. Track candidates are then propagated in the TPC using the Kalman filter, and the fit is continued into the ITS. After all the track candidates from the TPC are assigned to their clusters in the ITS, a special ITS stand-alone tracking procedure is applied to the remaining ITS clusters to recover the tracks that were not found in the TPC because of the momentum cutoff, dead zones between the TPC sectors, or decays (however, ITS ‘tracklets’ produced in this way have not been considered in the present analysis).
2nd) the track is propagated outward and reconstruction is invoked for all central detectors: At the end of the first pass, an estimate of the track parameters and their covariance matrix 11 is obtained in the vicinity of the main vertex. The Kalman filter is then applied in the outward direction starting with the ITS, and space points with large χ2 contributions are removed from the track fit. Once the outer radius of the TPC is reached, the precision of the track parameters is sufficient to extrapolate the tracks to the TRD, TOF, HMPID and PHOS detectors. Tracking in the TRD is done in a similar way to that in the TPC: tracks are followed to the outer wall of the TRD, and the assigned clusters further improve the momentum resolution. Next, the tracks are extrapolated to the TOF, HMPID and PHOS, where they acquire the information for particle identification.
3rd) the track is refitted inward and the ‘best’ track parameters are calculated at the vertex: At last, all the tracks are refitted inward, from their outermost reconstructed space point to the primary vertex (or to the innermost possible radius, e.g. for secondary tracks). Each track is associated with the analog dE/dx signal coming from the clusters included in the fit, the TRD signal and the
11The covariance matrix of the Kalman fit is a 5 × 5 matrix representing the uncertainties of the fit and their correlation.
2.3 Track Reconstruction in the Central Barrel Detectors 41
Time Of Flight. Tracks that failed the final refit toward the primary vertex are labeled as secondaries and used for the reconstruction of secondary vertices (see section 2.3.3); tracks that succeeded are labeled as constrainable, and both constrained and unconstrained fit parameters are stored.
Reconstructed tracks are stored in an array of combined track objects (class AliESDtrack), and saved into the ALICE Event Summary Data file (class AliESD). Further information is added to the ESD by the reconstruction algorithm of each detector, e.g. primary vertex position (see sec.2.3.1), particle identification (see sec.2.3.2), reconstructed secondary vertices (see sec.2.3.3).
Within the geometrical acceptance of the central barrel detectors, combined track finding has an efficiency well above 90%. The momentum resolution of the combined tracking (in PbPb collisions) is estimated between 1% and 2.5% for transverse momenta up to 10 GeV/c (see sec.5.1), and the angular resolution ∆φ is ∼ 0.2 mrad or even lower at higher momenta (see section 5.1.6 of the ALICE PPR [25]).
The ‘Distance of Closest Approach’ (DCA) to the primary vertex, defined as the extrapolated minimum distance between the fitted helix and the interaction point, has a resolution that depends both on the spatial resolution of the primary vertex (see sec.2.3.1) and on the track precision in the proximity of the interaction point (therefore, on the number of reconstructed space points in the ITS). In the case of PbPb collisions, where the main vertex is very well defined, the DCA resolution for tracks having 5 or 6 clusters in the ITS is of the order of 100 µm [25] (see also sec.5.2).
2.3.1 Reconstruction of the primary vertex
The primary vertex constraint is used at various steps of the tracking procedure. The reconstruction of the primary vertex position is done using the information provided by the silicon pixel detector (SPD).
Collisions occur in the ‘interaction diamond’, parametrized as a wide Gaussian along the z axis (σz = 5.3 cm), with approximately the width of the beam in the xy plane (σx,y ≃ 15 µm to 75 µm, depending on the beam luminosity and lifetime [75]).
The primary vertex algorithm uses the z-coordinate distribution of the reconstructed space points in the SPD layers to find the centroid zcen around which the distribution is symmetric. When the primary vertex moves away from the center of the detector (z = 0), an increasing fraction of hits is lost and the centroid of the distribution no longer gives the primary vertex position, so the final position is calculated from the correlation between the two centroids z1 and z2 found in the two layers. This procedure has been developed and validated on AliRoot simulations, and gives a resolution σz ≃ 10 µm for PbPb collisions 12. A similar approach
12Due to the much lower particle multiplicity, in pp collisions the primary vertex is reconstructed using a different algorithm (which works in 3D). The achieved resolution of both σz and σx,y varies between 50 and 150 µm, depending on the number of reconstructed tracks [25].
is applied to the reconstruction of the vertex position in the transverse plane, giving a resolution σx,y = 25 µm [25].
The x, y and z coordinates of the primary vertex (in the global ALICE coordinate system) are stored as an AliESDVertex object in the AliESD.
2.3.2 Particle identification
Charged particle identification in the central ALICE detector system is done by combining all the information from the ITS, TPC, TRD, TOF and HMPID. Particle identification in ALICE follows a ‘Bayesian’ approach [118], the most efficient way to combine information coming from different detecting systems that are efficient in complementary momentum subranges (see figure 2.5), and to combine signals of different nature (e.g. dE/dx, time-of-flight, transition radiation).
Figure 2.5. Detector efficiency for particle identification in different intervals of momentum, from about 100 MeV/c up to a few GeV/c. The reach of the TPC can be extended up to tens of GeV/c, by measuring particle separation in the relativistic rise of dE/dx.
A good introduction to Bayesian statistics can be found in references [119] [120] [121]. The ‘Bayesian’ approach differs from the (standard) ‘frequentist’ approach in the definition of probability. In Bayesian statistics the probability is not defined as the frequency of occurrence of an event in a large set of repetitions of identical experiments (as frequentists do), but as the plausibility that a hypothesis is true given the available information. The ‘probability’ in the Bayesian view is not a property of the random observable, but a quantitative encoding of our state of knowledge about these observables. The main consequence is that, in data analysis, the Bayesian approach can assign probabilities to hypotheses.
Charged particle identification in ALICE implements five hypotheses: e, µ, π, K and p (meaning both particles and antiparticles). Each detector class produces the
conditional probability density function (or detector response function) r(s|i) of observing a signal s when a particle of type i (i = e, µ, π, K, p) is detected. It is reasonable to assume that the functions r(s|i) reflect only properties of the detector and do not depend on other external conditions, like event and track selections.
The probability of being a particle of type i if the signal s is observed, w(i|s), depends not only on the probability density function r(s|i), but also on the abundance of this type of particle in the considered sample, i.e. the ‘a priori’ probability Ci to find the particle i in the detector. The quantities Ci (the relative concentrations of particles of type i) do not depend on the detector properties, but reflect the external conditions, like particle ratios and track selections. The underlying assumption of this approach is that Ci and r(s|i) are not correlated. The detector response function r(s|i) can be parametrized using available experimental data, e.g. for each track reconstructed in the TPC, r(s|i) (where s is the assigned dE/dx measurement) is a Gaussian with centroid 〈dE/dx〉 given by the Bethe-Bloch formula and width calculated from simulated data.
The probability of each particle hypothesis is given by Bayes' formula:

$$w(i|s) = \frac{r(s|i)\,C_i}{\sum_{j=e,\mu,\pi,\dots} r(s|j)\,C_j}. \qquad (2.4)$$
This method can be extended to combine P.Id measurements from several detectors, considering the whole system of different contributing detectors as a single block. The combined P.Id weights W(i|s̄) are calculated in a similar way to eq.2.4:

$$W(i|\bar{s}) = \frac{R(\bar{s}|i)\,C_i}{\sum_{k=e,\mu,\pi,\dots} R(\bar{s}|k)\,C_k}, \qquad (2.5)$$

where s̄ = (sITS, sTPC, sTRD, sTOF, ...) is the vector of the signals registered in the various detectors, Ci are the ‘a priori’ probabilities to be a particle of type i (same as in eq.2.4) and R(s̄|i) is the ‘combined response function’ of the whole system of detectors.
The ‘a priori’ probabilities Ci must reflect the relative concentrations of particles of type i in the sample of interest. In a simple approach Ci can be assumed to be equal for all i (i.e. the same amount of e±, µ±, π±, etc.), however in many cases it is possible to do better. For instance, it is possible to start with equal ‘a priori’ probabilities for all particles, and update those numbers event by event with the detected particle ratios. This method has been successfully tested on AliRoot simulations, which show that the ‘a priori’ probabilities quickly converge [122].
2.3.3 Secondary vertices
Thanks to the good spatial resolution of the ITS, the ALICE central barrel detector is capable of reconstructing secondary decay vertices (V0), cascade decays and kink topologies (i.e. a track deviating from its trajectory due to a decay into a neutral plus a charged particle).
The V0 finding algorithm is executed after the tracking procedure, and runs over the final AliESDtrack objects stored in the ESD. The algorithm starts with the selection of secondary tracks, e.g. tracks with a too large impact parameter with respect to the primary vertex. Each secondary track is combined with all the other secondary tracks of opposite charge, and different cuts are applied to the positive and the negative track impact parameters. With the helix track parametrization, the minimum Distance of Closest Approach (DCA) between the two tracks is calculated, both in 3 dimensions and in the transverse plane; pairs of tracks are rejected if their DCA is larger than a given value.
The reconstructed V0 candidates are then stored in the AliESD as AliESDV0 objects. They can be included in the AliFlowEvent and submitted to the correlation analysis (see sec.3.3).
Chapter 3
Flow Analysis in ALICE
This chapter will give an overview of the flow analysis with the Event Plane Method [123], introducing the terminology and describing the strategy from the experimental point of view (sec.3.1 and 3.2). The chapter includes a description of the analysis code as it has been implemented for the ALICE environment (sec.3.3).
Other flow analysis techniques have also been developed (i.e. the Cumulants and the Lee-Yang zeros methods), and some of them are currently being implemented in ALICE. A brief overview will be given in sec.3.4.1, pointing out their main advantages and disadvantages with respect to the event plane method.
3.1 Aim of the Flow Analysis
As introduced in sec.1.1, in a non-central heavy ion collision the impact parameter b together with the z axis (the beamline) defines the Reaction Plane (see fig.1.3). The azimuthal angle between the reaction plane and the plane x − z (measured in the lab frame 1) is called Ψtrue or ΨR (see fig.1.5).
Due to the geometry of the collision, the overlap region between the two nuclei has an initial spatial anisotropy. This causes an angular dependence of the pressure gradient (which is larger along the smallest dimension of the overlap, i.e. the direction of b), and therefore the evolution of the system follows an anisotropic expansion: more particles are radiated along the direction of the reaction plane. The asymmetry observed in the final momentum distribution of the radiated particles is called anisotropic flow (see sec.1.1).
A Fourier expansion of the Lorentz invariant distribution of outgoing momenta is the usual way to characterize anisotropic flow [123]:

$$E\,\frac{d^3N}{dp^3} = \frac{1}{2\pi}\,\frac{d^2N}{p_T\,dp_T\,dy}\left(1 + \sum_{n=1}^{+\infty} 2\,v_n(p_T, y)\cos\left[n(\phi-\Psi_R)\right]\right), \qquad (3.1)$$
1In the laboratory frame z is the beamline direction, y is the vertical direction, and x is the third cartesian axis.
where φ is the azimuthal angle of outgoing particles and ΨR is the reaction plane angle, both measured in the laboratory frame (see also eq.1.1).
The Fourier coefficients vn are then given by:

$$v_n = \langle \cos\left[n(\phi-\Psi_R)\right] \rangle, \qquad (3.2)$$

where the average is taken over all particles of all events. For odd harmonics, vn changes sign between forward and backward rapidity: in symmetric collisions the particle distributions in the two hemispheres ±y (or ±η) are equal, but opposite in sign because of global momentum conservation.
Figure 3.1. Left: transverse picture of elliptic flow, projected on the transverse plane (x−y), and side picture of directed flow, projected on the beam-vertical plane (z − y). Right: physical meaning of v2 as a modulation of the dN/dφ distribution with respect to the reaction plane Ψ.
We call the first Fourier coefficient v1 directed flow and the second coefficient v2 elliptic flow (see sec.1.1). Figure 3.1(a) gives an intuitive picture of these two observables, showing the effect of v2 on the transverse plane and the effect of v1 on the beam-vertical plane. Fig.3.1(b) shows the physical meaning of v2 as a modulation of the azimuthal distribution dN/dφ with respect to the reaction plane.
Higher harmonics can also be studied, but their magnitude is much smaller. Recent studies have shown that the ratio $v_4/v_2^2$ is an important observable which provides information about the ideal fluid behavior of the system [124]. However, this thesis is devoted to the study of elliptic flow.
The method applied in the analysis is the Event Plane method, introduced by Danielewicz and Odyniec in 1985 [125] and generalized by Poskanzer and Voloshin [123]. It has been successfully used in many heavy ion experiments, from the AGS to the SPS and RHIC, and in particular by the STAR collaboration, which wrote a specific software package (from which the present analysis code has been developed). The event plane method and its implementation in the ALICE environment are extensively described in the following sections.
The event plane analysis implemented for ALICE can be applied to identified/unidentified charged particles and to neutral strange particles (K0, Λ0) reconstructed as neutral secondary vertices from their decay products. However, due to time limits and to continuous changes in the reconstruction framework, the analysis has been limited to unidentified charged particles (see chap.5).
3.2 Event Plane Analysis method
The event plane method is a straightforward consequence of eq.3.2, with the only remark that the true (non observable) reaction plane of the collision is replaced by the experimentally reconstructed ‘event plane’.
Therefore, the first step of the analysis is the reconstruction (on an event basis) of the event plane Ψ from the anisotropy of the event itself.
The ‘observed’ event plane Ψobs, also called Ψn to emphasize the harmonic used in the calculation, approximates the true reaction plane ΨR and can be used as a replacement, at the cost of underestimating the true particle-plane correlation; this underestimate can be kept under control (see 3.2.1).
The procedure to extract Ψobs from the emitted particles starts with the reconstruction of the flow vector, also called ~Q vector after the original notation [123], defined for each event as:

$$\vec{Q}_n = \begin{pmatrix} \sum_i w_i \cos(n\phi_i) \\ \sum_i w_i \sin(n\phi_i) \end{pmatrix} = Q_n \begin{pmatrix} \cos(n\Psi_n^{obs}) \\ \sin(n\Psi_n^{obs}) \end{pmatrix}, \qquad (3.3)$$
where the sum includes all detected particles. However, since not all the particles have the same flow 2, weight coefficients wi are introduced to enhance the contribution of particles with larger flow, in order to make the ~Q vector a better defined observable. The choice of optimal weights will be discussed in section 3.2.3; in any case it is always possible to use wi = 1 for all the particles.
For the 1st harmonic event plane (which is used to study odd harmonic coeffi cients), the weights wi must have opposite signs in forward and backward rapidity for reflection symmetry 3.
The observed event plane angle of the nth harmonic is given by the orientation of ~Qn:

$$\Psi_n = \frac{1}{n}\arctan\left(\frac{Q_n^y}{Q_n^x}\right), \qquad (3.4)$$

which by construction gives Ψn ∈ [−π/n, π/n).
The flow coefficients vn are obtained from the correlation between ~Qn and the momentum of the emitted particles in the transverse plane. At the nth harmonic,
2E.g. the observed pT dependence of v2, see sec.1.3.4.
3In symmetric collisions, the particle distribution is equal but opposite in momentum around the center of mass, and the averages of cos(φ) and sin(φ) with φ ∈ [0, 2π) are 0.
this correlation is calculated by averaging the cosine of the difference between the azimuthal angle of the outgoing particle φi and the event plane angle Ψn:

$$v_n^{obs} = \langle \cos\left[n(\phi-\Psi_n)\right] \rangle. \qquad (3.5)$$

The average is taken over all the selected particles in all events, in the centrality class under study. What is measured in this way is the ‘observed’ flow vobsn, whose magnitude is lower than the ‘true’ flow because in general Ψn ≠ ΨR.
It is also possible to extract the event plane angle from any harmonic m and use it in the calculation of the flow coefficient vn, with n ≥ m and n = km for an integer k:

$$v_n^{obs} = \langle \cos\left[km(\phi-\Psi_m)\right] \rangle. \qquad (3.6)$$

In this way the sign of vn is determined relative to Ψm, but the resolution deteriorates as k increases [123]. Due to the low sensitivity to v1 with the ALICE central barrel detector 4, this strategy has not been applied in the present analysis.
The difference between the true ΨR and the reconstructed Ψn gives the resolution of the event plane, i.e. the accuracy with which Ψn reproduces the true orientation of the reaction plane ΨR.
From the observed vobsn, the corrected values of the flow coefficients are obtained as:

$$v_n = \frac{v_n^{obs}}{\langle \cos\left[km(\Psi_m-\Psi_R)\right] \rangle} = \frac{\langle \cos\left[km(\phi-\Psi_m)\right] \rangle}{\langle \cos\left[km(\Psi_m-\Psi_R)\right] \rangle}. \qquad (3.7)$$

Following the prescription of the event plane method [123], it is possible to experimentally estimate the average 〈cos [km(Ψm −ΨR)]〉 using the subevents (see also sec.3.2.1).
3.2.1 Resolution
The (full-event) resolution of the event plane Ψn is defined as the cosine of the difference Ψn −ΨR. For a known value of vn it can be calculated as [123]:

$$res_{full} = \langle \cos\left[km(\Psi_m-\Psi_R)\right] \rangle = \frac{\sqrt{\pi}}{2\sqrt{2}}\,\chi_m\,e^{-\chi_m^2/4}\left[ I_{\frac{k-1}{2}}(\chi_m^2/4) + I_{\frac{k+1}{2}}(\chi_m^2/4) \right], \qquad (3.8)$$

where $\chi_m = v_m/\sigma$ and $\sigma = \sqrt{\frac{1}{2M}\frac{\langle w^2\rangle}{\langle w\rangle^2}}$ (choosing wi = 1 gives $\chi_m = v_m\sqrt{2M}$, with vm the true flow). M is the particle multiplicity used in the calculation of ~Q, and Ix are modified Bessel functions of order x.
Since the resolution deteriorates as k increases (eq.3.8), elliptic flow is best measured by using the second harmonic event plane Ψ2. Moreover, eq.3.8 is monotonically increasing with $\chi_m \propto v_2\sqrt{M}$. This gives a good resolution for high
4The directed flow increases with rapidity [126] [34], therefore v1 is small in the acceptance range of the present analysis (|η| < 0.9), giving a poor resolution on Ψ1.
multiplicity and strong flow (mid-central events), and a poor resolution at low multiplicity (peripheral events) and weak flow (central events). See section 4.2.
The subevent method to calculate the resolution [123] splits the event into two separate equal-multiplicity subevents. They can be randomly chosen or selected by positive/negative pseudorapidity 5.
For each subevent the subevent plane angle ΨAn is calculated in the same way as in eqs.3.3 and 3.4:

$$\Psi_n^A = \frac{1}{n}\arctan\left(\frac{\sum_{i\in A} w_i \sin(n\phi_i)}{\sum_{i\in A} w_i \cos(n\phi_i)}\right), \qquad (3.9)$$

where the sum is restricted to the particles in the subevent.
The difference ∆Ψsub = ΨA − ΨB already gives the accuracy of the measured subevent plane (or subevent resolution, ressub):

$$res_{sub} = \langle \cos\left[n(\Psi_n^A-\Psi_R)\right] \rangle = \sqrt{\langle \cos\left[n(\Psi_n^A-\Psi_n^B)\right] \rangle}. \qquad (3.10)$$
At very low resolution (〈cos [n(Ψn −ΨR)]〉 ≪ 1) equation 3.8 is approximately linear in χm, which is proportional to the square root of the multiplicity M used in the calculation. Taking into account that the full event has twice the multiplicity of the subevent, a first estimate of the full-event resolution is given by:

$$\langle \cos\left[n(\Psi_n-\Psi_R)\right] \rangle \approx \sqrt{2\,\langle \cos\left[n(\Psi_n^A-\Psi_n^B)\right] \rangle}. \qquad (3.11)$$

For higher values of the resolution (〈cos [n(Ψn −ΨR)]〉 ≈ 1) this approximation does not hold, and to correctly extrapolate the full-event resolution (with χ and σ of eq.3.8) an iterative procedure is needed (and has been implemented in the analysis code): the first estimate of the full-event resolution (if √2 × ressub < 1, otherwise the subevent resolution is used) is applied to vobsn to obtain v′n, which is then used to calculate χn. From equation 3.8 a new value of the resolution is calculated and applied again to vobsn to obtain v′′n, and so on. The iteration continues until the variation at each step becomes smaller than a lower limit, at which point the procedure stops and the last calculated resolution is taken. It turns out that such a procedure converges quickly: just a few steps are needed to obtain a stable estimate of the full-event resolution.
3.2.2 Autocorrelation
The flow coefficients vn are meant to measure the average correlation between each particle and the rest of the event. However, the presence of the particle i in the
5Other ways to split the event into two separate equal multiplicity subevents can be used, e.g. separating positively and negatively charged particles. Any method could work, as long as no bias is introduced in the azimuthal distribution. In the present analysis only η and random subevents have been used, in the first case particles are simply divided into positive and negative pseudorapidity, in the second case particles are randomly separated into two arrays of equal multiplicity.
calculation of the event plane slightly moves the direction of ~Qn toward the direction of ~pi, introducing a small but non-negligible ‘spurious’ correlation between φi and Ψn, and therefore a bias on the flow measurement.
There are two ways to avoid autocorrelation, both implemented in the flow analysis code:
Subevent correlation: the event is split into two subevents and each particle i is correlated to the event plane angle Ψn calculated from the opposite subevent. The average vn is calculated as:

$$v_n = \frac{1}{2}\left(v_n^A + v_n^B\right), \qquad (3.12)$$

where vAn and vBn are calculated as:

$$v_n^A = \frac{1}{N/2}\sum_{i\in A} \frac{\cos\left[n(\phi_i-\Psi_n^B)\right]}{\langle \cos\left[n(\Psi_n^B-\Psi_R)\right] \rangle}, \qquad (3.13)$$

where the term in the denominator is the resolution of the subevent.
Full-event correlation: for each particle i the event plane angle Ψn,i is recalculated by subtracting the contribution of ~pi from ~Qn. Eq.3.7 is rewritten as:

$$v_n = \frac{1}{N}\sum_{i=1}^{N} v_{n,i} = \frac{1}{N}\,\frac{\sum_{i=1}^{N} \cos\left[n(\phi_i-\Psi_{n,i})\right]}{\langle \cos\left[n(\Psi_n-\Psi_R)\right] \rangle}, \qquad (3.14)$$

where φi is the azimuthal angle of the particle i, and Ψn,i is the event plane angle calculated from a selection of particles that does not contain the particle i. The denominator expresses the resolution of the full event.
As shown, in the first case vobsn is corrected by the resolution of the subevents (equation 3.10), while in the second case it is corrected by the full-event resolution, calculated from equation 3.8. In other words, for the subevent correlation v2 = vsub2/ressub (eq.3.10), for the full-event correlation v2 = vfull2/resfull (eq.3.8). Since the resolution is proportional to √M, resfull > ressub and likewise vfull2 > vsub2. The ratio between vn and the resolution should compensate for the difference, so that the flow coefficients calculated in both ways are expected to be equal within the statistical error 6.
Because the ~Qn vector is better defined when more particles are used in its calculation (see eq.3.3), the full-event correlation seems to be the best choice; however, the subevent correlation works better in reducing nonflow effects (see section 4.1). Applying the two methods in parallel provides a useful cross-check.
6This may not be true when nonflow effects are present, see section 4.1.
3.2 Event Plane Analysis method 51
The effect of autocorrelations is larger at low multiplicity and becomes smaller when the multiplicity is high (and the bias of a single particle on the direction of ~Qn becomes less important). However, if the ‘true’ flow is also small (e.g. in central events), the autocorrelations can dominate the measurement.
For simplicity of notation, here and in the following the event plane angle is simply written as Ψn, with the above discussion understood.
3.2.3 Weights
Weight coefficients wi are used in the calculation of the ~Qn vector to make it a better defined observable and increase the resolution of the event plane. Weights should be chosen in such a way as to enhance the contribution of particles with higher flow, since they define the direction of ~Qn better. Ideal weights should be proportional to vn itself [127].
Experimentally it is observed that the elliptic flow increases with the transverse momentum (high pT fragments are more likely to be radiated along the reaction plane) [128]; therefore a good choice of the weights for the calculation of ~Q2 is the transverse momentum itself or some monotonic function wi(pT) ∝ pT. In the analysis presented in the following chapters, the choice of the weight was determined by the shape of the input v2(pT) used in the simulation 7, and therefore:

$$w_i(p_T) = \begin{cases} p_T/p_T^{sat} & p_T < p_T^{sat} \\ 1 & p_T \geq p_T^{sat} \end{cases} \qquad (3.15)$$

with psatT = 2 GeV/c. This choice gives a small gain in resolution with respect to unitary weights (see ch.4 and 5).
As already mentioned, for odd harmonics of the event plane the coefficients wi must change sign for forward/backward rapidity. The weights for the calculation of ~Q1 can be chosen proportional to y or η (which change sign in the two opposite hemispheres).
The weight coefficients wi must also compensate for the azimuthal anisotropy of the detector acceptance, which may add spurious contributions at higher harmonics to the measured flow. However, this kind of correction is very detector-specific and can be directly calculated from the observed dN/dφ distribution of reconstructed data before running the flow analysis (see the following section for the details).
3.2.4 Flattening Weights and Reconstruction Efficiency
Due to the geometrical arrangement and the segmentation of the detecting volumes (in particular the TPC), the reconstruction efficiency in the ALICE central barrel is φ dependent.
7In the real experiment the choice of weights is done in a later stage: once the shape of v2(pT ) has been reconstructed by running the analysis with unitary weights, results can be refined by applying weights that are proportional to the observed v2(pT ).
Figure 3.2. (a) dN/dφ distribution of (from top): all generated particles (MC input), reconstructed tracks and reconstructed primary particles in the ESD passing the minimal event plane cuts (see sec.5.3.2), and all reconstructed secondaries in the ESD. The distribution of reconstructed tracks shows the 18 sectors of the TPC. (b) Efficiency correction (φ weights) calculated with eq.3.16, for all reconstructed tracks passing the minimal cuts, and for reconstructed primaries. Plot generated from all the simulated Hijing + GeVSim events (see chapter 5 for the simulation details).
The overall dN/dφ distribution of fig.3.2(a) clearly shows the radial segmentation of the ALICE TPC. The dips in the distribution of reconstructed primaries correspond to the azimuthal coordinates of the cracks between the 18 sensitive pads on the outer walls of the TPC (see sec.2.1.2). The distribution of secondaries shows a double peak at each dip, due to the particles produced in the 18 iron bars of the field degrader, located between the sensitive pads at the innermost radius of the TPC.
This azimuthal anisotropy in the reconstruction efficiency may introduce a spurious 18th harmonic component into the observed particle distribution, biasing the direction of the reconstructed reaction plane. To correct for this effect we assume that the cumulative φ distribution from a large sample of events is flat in an ideal detector; this is generally true due to the random orientation of the impact parameter of the collision with respect to the laboratory frame.
This φ dependence of the reconstruction efficiency can be corrected by introducing φ weights inversely proportional to the azimuthal efficiency of each φ bin in the reconstructed dN/dφ distribution. Each track i gets the weight w(φi) calculated as:

$$w(\phi_i) = \frac{1}{N_{\phi_i}} \times \frac{\sum_{j=1}^{N_{bins}} N_{\phi_j}}{N_{bins}}, \qquad (3.16)$$

where φi is the azimuthal angle at which the track i is emitted, and Nφi is the content of the histogram bin that contains φi. The obtained weights are then used, together with the pT (or η) weights (see sec.3.2.3), in the calculation of ~Qn.
The φ weights must be calculated specifically for the set of cuts in use, to take
into account the reconstruction efficiency of the specific track selection. Moreover, this procedure can be directly applied to real data without any further use of simulations (this is done, for instance, in the event plane analysis at STAR).
However, the azimuthal efficiency is not the same at all transverse momenta, being almost zero for very high pT tracks flying along a crack in the TPC. A more precise estimate of the weights would be done in pT bins, creating a two-dimensional weight array w(pT, φ), but this approach would require much larger statistics than available, and therefore the φ weights used in the present analysis have been calculated irrespective of pT.
3.2.5 Differential & Integrated Flow
In the present analysis, the elliptic flow has been studied as a global property of the whole event (we talk in this case of ‘integrated’ flow), and with respect to the transverse momentum of the particles pT (we talk in this case of ‘differential’ flow). The dependence of v2 on other kinematic variables, such as y or η, is approximated to be flat (see sec.1.3.4) and has not been studied further.
Assuming the pT bins used in the analysis are small enough (of the order of the detector resolution, see sec.5.1), we can consider the efficiency to be approximately constant in each pT bin. The differential flow is therefore calculated by restricting the average of equation 3.7 to separate kinematic windows:

$$v_2(p_T) = \frac{\langle v_2 \rangle_{p_T}}{res_2} = \frac{\langle \cos\left[2(\phi-\Psi_2)\right] \rangle_{p_T}}{\langle \cos\left[2(\Psi_2-\Psi_R)\right] \rangle}. \qquad (3.17)$$

The differential flow coefficients, calculated in each pT bin, describe the pT dependence of v2.
Existing results show that the differential shape of v2(pT ) is a monotonically increasing function of pT (see sec.1.3.4). In a real experiment the reconstruction efficiency is generally not flat with respect to the transverse momentum, and there fore to correctly calculate the total (integrated) 〈v2〉 of the event, the particle average must be weighted by the reconstruction efficiency as a function of pT :
$$ \langle v_2 \rangle = \frac{1}{N_{\rm tot}} \int_{p_T=0}^{\infty} v_2(p_T)\, \frac{dN'}{dp_T}\, dp_T = \frac{1}{\mathrm{eff}\; N^{\rm obs}_{\rm tot}} \sum_{p_T\ \rm bins} v_2(p_T) \times \frac{dN^{\rm obs}}{dp_T} \times \mathrm{eff}(p_T) \,. \qquad (3.18) $$

The contribution to ⟨v2⟩ from the low-pT part of the spectrum (pT < 100 MeV/c, where the reconstruction efficiency is ∼ 0) is evaluated by extrapolating both v2(pT) and dN/dpT to pT = 0. See sec.5.3.4 for a practical example.
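Numerically, the discrete sum of eq.3.18 amounts to a yield-weighted average of the binned v2(pT). A minimal sketch (illustrative Python, hypothetical function name; the convention assumed here is that the observed yield in each bin is corrected back to the true yield by dividing by eff(pT)):

```python
def integrated_v2(v2_bins, n_obs_bins, eff_bins):
    """Average the binned v2(pT) with efficiency-corrected yields as weights:
    n_true = n_obs / eff per bin, <v2> = sum(v2 * n_true) / sum(n_true)."""
    num = den = 0.0
    for v2, n_obs, eff in zip(v2_bins, n_obs_bins, eff_bins):
        if eff <= 0:
            # bins with ~0 efficiency (pT < 100 MeV/c) are handled by
            # extrapolating v2(pT) and dN/dpT to pT = 0 instead
            continue
        n_true = n_obs / eff
        num += v2 * n_true
        den += n_true
    return num / den
```

With a flat efficiency the result reduces to the plain yield-weighted mean, as it should.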
54 Flow Analysis in ALICE

3.3 Implementation

The event plane analysis has been implemented for ALICE as a collection of ROOT C++ classes, under the name of the AliFlow package. Its structure is similar to the StFlow package [96], widely used for flow measurements by the STAR collaboration, starting from which the AliFlow classes have been developed.
The main object of the analysis is the AliFlowEvent, a high-level object built from the ALICE Event Summary Data (AliESD) and optimized for the flow analysis. The most useful ESD information is extracted and organized into an efficient structure, which is then submitted to the analysis chain.
Unlike the StFlow package, which was built on top of a more complex framework (the Star Class Libraries, SCL [96] [129]), the AliFlow package only depends on ROOT, which improves its portability. AliFlowEvents can be created from the KineTrees contained in the kinematic files produced by the event generators, while their creation from the AliESDs only requires some libraries from AliRoot. AliFlowEvents can be stored for later processing, and the analysis can be entirely executed in ROOT.
However, the parallel processing of ESDs and KineTrees, and the one-to-one comparison between reconstructed and simulated particles (from which the efficiency is calculated), need the AliAnalysisTask machinery to be in place [130], and therefore the whole AliRoot framework (see sec.2.2).
Figure 3.3. Flow diagram of the production chain. Events are generated (Hijing, GeVSim, ...), transported and reconstructed within the AliRoot framework (the reconstruction phase uses the same algorithm that will be used on real LHC events). The AliFlowMaker (embedded in an AliAnalysisTaskRL) translates the produced ESDs and KineTrees into AliFlowEvents, on which the flow analysis is later executed (see fig.3.4). Using the same set of cuts applied in the analysis, the efficiency histograms are also filled at this step by a one-to-one comparison of simulated and reconstructed particles.
Figure 3.4. Flow diagram of the event plane analysis chain. After a first loop, where the φ weights are calculated from the dN/dφ distribution of all events, the analysis proceeds event by event and calculates the ~Q2 vector and the 'observed' event plane angle Ψ2 (for full and subevents). Selected particles (and V0s) are then correlated to the 'observed' event plane to obtain v2^obs as a function of pT, while the event resolution is calculated from subevents as described in sec.3.2.1. At the end of the event loop, the observed v2(pT) is corrected by the average event plane resolution, and the integrated elliptic flow is calculated taking into account the efficiency corrections, as a function of pT, from the efficiency histogram (see fig.3.3).
3.3.1 Analysis Strategy

The flow analysis package is organized in two main steps (see fig.3.3 and 3.4):
(i) Flow Maker: A parser reads the reconstructed Event Summary Data files and the KineTree files produced by AliRoot and creates AliFlowEvents.
• Only the most useful observables for the flow analysis are stored as data members of the AliFlowEvent class, i.e. a few global event observables (main vertex position, particle multiplicity), the kinematics of the reconstructed tracks (pT, η and φ), and the variables most sensitive for the selection of good track candidates, together with the p.Id. signal from the central barrel detectors. V0 candidates can also be stored in a separate array.
• Very loose quality cuts are applied at this step (e.g. only tracks with a TPC signal from the ESD, only primary hadrons from the KineTree).
• Data are organized into AliFlowEvent objects, which can be stored on disk for later analysis.
• If the AliESD loop is executed in an AliAnalysisTaskRL or an AliSelectorRL, efficiency corrections are also calculated 8.
Fig.3.3 gives a schematic view of the creation of AliFlowEvents, starting from an AliRoot simulation.
(ii) Flow Analysis: The analysis runs over AliFlowEvent objects; it can be executed on the fly during the parsing process, or later on stored files. Fig.3.4 gives a schematic view of the event plane analysis starting from the AliFlowEvent.
• A first loop over the event sample produces the flattening φ weight histogram (see section 3.2.4); this is usually done over the entire available sample.
• A second event loop performs the calculation of the event plane, event by event, and the correlation analysis; the observed elliptic flow v2^obs(pT) of selected particles is stored in a profile histogram (ROOT TProfile).
• At the end of the loop, the resolution of the full and the subevents is calculated by averaging cos(∆Ψ2) over all events in the selected centrality class. The observed v2(pT) is corrected by the event plane resolution; if efficiency corrections are available, the integrated flow ⟨v2⟩ is also calculated.
In a single loop, different track selections can be used for the calculation of the event plane; the resulting v2 and event plane resolution are calculated in parallel. The selection of tracks used for the event plane determination must be defined prior to the first loop, to correctly calculate the flattening φ weights. A separate selection is applied to the particles entering the correlation analysis (the ones entering the average ⟨cos 2(φ−Ψ2)⟩), which can include stricter cuts for the isolation of primaries or for particle identification. The efficiency corrections must be calculated according to the set of cuts used for the correlation analysis.
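The kernel of the second loop can be sketched as follows (illustrative Python with unit weights; the actual implementation is the ROOT C++ code of the AliFlow classes, and the function names here are hypothetical):

```python
import math

def event_plane_angle(phis, weights=None):
    """2nd harmonic event plane: Psi_2 = (1/2) atan2(Qy, Qx), with
    Q2 = sum_i w_i (cos 2phi_i, sin 2phi_i); phi weights would enter as w_i."""
    if weights is None:
        weights = [1.0] * len(phis)
    qx = sum(w * math.cos(2 * p) for w, p in zip(weights, phis))
    qy = sum(w * math.sin(2 * p) for w, p in zip(weights, phis))
    return 0.5 * math.atan2(qy, qx)

def v2_observed(phis, psi2):
    """Observed elliptic flow of the selected particles: <cos[2(phi - Psi_2)]>."""
    return sum(math.cos(2 * (p - psi2)) for p in phis) / len(phis)
```

In the real analysis the particle being correlated is removed from ~Q2 before computing Ψ2, to avoid autocorrelations, and the result is accumulated in a TProfile per pT bin.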
The complete list of C++ classes that have been implemented for the event plane analysis is given in appendix A, together with a brief description of their purposes.
8The functionalities of the AliRunLoader (in particular, the access to the AliStack) are needed to make a one-to-one comparison between reconstructed tracks and simulated particles (see sec.2.3).
3.3.2 The AliFlow package

The AliFlow package is included in the standard release of AliRoot 9, and can be compiled together with the AliRoot framework.
The classes related to the analysis (everything except the AliFlowMakers) can also be exported and compiled as a standalone ROOT library 10, and loaded into ROOT to execute the analysis over existing AliFlowEvents.
A full set of macros (to make the AliFlowEvents, to produce the φ weights and to run the analysis) is also included in the package. Every class is provided with inline documentation that can be compiled with the standard ROOT THtml class to produce an HTML layout [131].
However, due to the many recent modifications in AliRoot (e.g. the AliESD has been replaced by the AliAOD, and the AliAnalysisTaskRL has been taken out), some of the functionality described above may not work at present. The latest version of AliRoot with which the AliFlow package has been tested to work is v4-04-Rev-14.
3.4 Other Analysis Methods

Besides the event plane analysis described in sec.3.2, other methods to extract the flow coefficients vn from heavy-ion data have been developed in recent years. A brief description of these methods is given in this section, pointing out their main advantages and disadvantages (see sec.3.4.1).
The event plane analysis can be seen as a particular case of the two-particle correlation method [132]; in this view the analysis can be extended further to 2k-particle correlations calculated with the 'cumulant' method [133], and when k is pushed to infinity we end up with the 'Lee-Yang Zero' method [134].
Pair-Correlation method
Since all particles are correlated to the reaction plane, they are also indirectly correlated to each other; the anisotropic flow can therefore be measured by averaging the observed two-particle azimuthal correlations, without prior determination of the event plane angle [132].
In this approach, the integrated flow at the nth harmonic is calculated as:

$$ \langle v_n \rangle^2 = \left\langle \cos[n(\varphi_i - \varphi_j)] \right\rangle , \qquad (3.19) $$

where i and j run over all the particles in the event, and the average is taken over the whole centrality class of interest.
9The package is included in the 'Physics Working Group 2' (soft physics) folder, under AliRoot/PWG2/FLOW.
10The package is compiled with 'rootcint' [94] and 'gcc' [95] as a shared object library (named AliFlow.so). This kind of library can be loaded in ROOT (see sec.2.2.1).
From the integrated flow, the differential flow can be calculated as:

$$ v_n(p_T) = \frac{\left\langle \cos[n(\varphi_i - \varphi_j)] \right\rangle}{\langle v_n \rangle} , \qquad (3.20) $$

where j is now limited to a specific pT bin, and i runs over all the particles in the event.
The pair correlation method does not need to include corrections for the detector anisotropy [135]. On the other hand, the method does not reconstruct an event plane.
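Eqs.3.19 and 3.20 translate almost literally into code. A toy sketch (illustrative Python; at low multiplicity the pair average can fluctuate below zero, which this sketch simply clamps):

```python
import math
from itertools import combinations

def vn_integrated_pairs(phis, n=2):
    """<v_n>^2 = <cos[n(phi_i - phi_j)]> over all distinct pairs (eq. 3.19)."""
    pairs = list(combinations(phis, 2))
    c = sum(math.cos(n * (a - b)) for a, b in pairs) / len(pairs)
    return math.sqrt(max(c, 0.0))  # clamp: finite-M fluctuations can give c < 0
```

The differential flow of eq.3.20 follows by restricting one index of the pair to a pT bin and dividing the pair average by ⟨vn⟩.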
Cumulant method
Eq.3.19 can be seen as the construction of a two-particle correlator, similar to the 2nd order cumulant. More generally, the cumulant approach considers multi-particle correlations [136].
The cumulant of 2k-particle azimuthal correlations, cn{2k} (where n is the harmonic and 2k is the order of the cumulant), is a quantity built from all the measured azimuthal correlations up to order 2k:

$$ c_n\{2k\} = \left\langle e^{\,in(\varphi_1 + \ldots + \varphi_{k'} - \varphi_{k'+1} - \ldots - \varphi_{2k''})} \right\rangle , \qquad (3.21) $$

with k' + k'' ≤ 2k. For k = 1 the real part of eq.3.21 reduces to eq.3.19:

$$ \Re\left( c_n\{2\} \right) = \Re\left( \left\langle e^{\,in(\varphi_1 - \varphi_2)} \right\rangle \right) = \left\langle \cos[n(\varphi_1 - \varphi_2)] \right\rangle . \qquad (3.22) $$
The advantage of the cumulant of 2k-th order is that it is insensitive to the contribution of lower order correlations, so that only the genuine 2k-particle correlation remains.
From the cumulants it is possible to calculate the integrated flow Vn, defined here as the average projection onto the reaction plane of the event flow vector ~Qn (see also eq.3.3):

$$ V_n = \left\langle \sum_j \cos[n(\varphi_j - \Psi_R)] \right\rangle = M v_n . \qquad (3.23) $$

Depending on the order of the cumulant, Vn is calculated as:

$$ V_n\{2\}^2 = c_n\{2\} , \qquad V_n\{4\}^4 = -c_n\{4\} , \qquad V_n\{6\}^6 = \frac{c_n\{6\}}{4} , \;\ldots \qquad (3.24) $$
Differential flow can also be determined with cumulants; for the details, refer to [137].
From the practical point of view, the calculation of the cumulants starts from the generating function [137]:

$$ G_n(z) = \left\langle \prod_{j=1}^{M} \left[ 1 + w_j \left( z e^{-in\varphi_j} + z^{*} e^{in\varphi_j} \right) \right] \right\rangle , \qquad (3.25) $$
where z is a complex number and z∗ is its complex conjugate. The product runs over all the particles in each event, and the average is taken over all events. The 2k-th order cumulant is given by the coefficient of z^2k in a series expansion of the logarithm of Gn. The function Gn is evaluated at a few points in the complex plane around the origin z = 0; by taking the logarithm at each of these points, it is possible to interpolate the derivatives of ln(Gn) and obtain the cumulants cn{2k}.
The elliptic flow calculated with the 2k-th order cumulant is only affected by nonflow correlations among 2k particles, which scale as N^(1−2k) (N is the total multiplicity of particles in an event [133]). Therefore, the 4th order cumulant is already enough to systematically remove all the nonflow effects due to two- and three-particle correlations (e.g. particle decays), but it does not remove any genuine correlation of four or more particles (e.g. jets).
Since flow is a collective effect, higher order cumulants can be preferable to remove nonflow correlations (the 2k-th order cumulant removes nonflow effects due to (2k−1)-particle correlations), but their calculation becomes more and more tedious as k increases. The cumulant method can be used at different orders, and the results compared to each other as a cross-check 11 [141]. Like the pair correlation method, the cumulant method does not measure an event plane.
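For small toy events, the 2nd and 4th order cumulants can be computed by brute force over distinct particle tuples (illustrative Python, O(M^4), so only usable on toy events; real analyses use the generating function of eq.3.25). The 4th order cumulant subtracts the two-particle contribution, c_n{4} = ⟨⟨4⟩⟩ − 2⟨⟨2⟩⟩²:

```python
import math
from itertools import permutations

def two_corr(phis, n=2):
    """<2> = <cos[n(phi_a - phi_b)]> over distinct ordered pairs."""
    M = len(phis)
    s = sum(math.cos(n * (phis[a] - phis[b]))
            for a in range(M) for b in range(M) if a != b)
    return s / (M * (M - 1))

def four_corr(phis, n=2):
    """<4> = <cos[n(phi_a + phi_b - phi_c - phi_d)]> over distinct quadruplets."""
    terms = [math.cos(n * (phis[a] + phis[b] - phis[c] - phis[d]))
             for a, b, c, d in permutations(range(len(phis)), 4)]
    return sum(terms) / len(terms)

def cumulant_flow(events, n=2):
    """v_n{2}^2 = c_n{2} and v_n{4}^4 = -c_n{4}, with c_n{4} = <<4>> - 2<<2>>^2."""
    c2 = sum(two_corr(e, n) for e in events) / len(events)
    c4 = sum(four_corr(e, n) for e in events) / len(events) - 2 * c2 ** 2
    vn2 = math.sqrt(max(c2, 0.0))
    vn4 = (-c4) ** 0.25 if c4 < 0 else float('nan')
    return vn2, vn4
```

For a fully correlated toy event (all angles equal modulo π) both estimates return v2 = 1, as expected.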
Lee-Yang Zero method

Extending the cumulant method to an infinite order cumulant cn{∞} leads to the Lee-Yang zero method [134].
Similarly to the cumulant method, a generating function is defined [142]:

$$ G^{\theta}(r) = \left\langle \prod_{j=1}^{M} \left[ 1 + i\, r\, w_j \cos\!\left( n(\varphi_j - \theta) \right) \right] \right\rangle , \qquad (3.26) $$
where r is a real positive number and θ is an angle between 0 and π/n. The product involves all the particles in each event, and the average is taken over the events.
The behavior of the zeros of the generating function Gθ reflects the presence of collective flow in the system [134]. In particular, the first zero of the generating function is directly related to the magnitude of the anisotropic flow.
In practice, the Lee-Yang zero method starts with the calculation of Gθ for many values of θ; for each of them, the first minimum of the absolute value |Gθ(r)| is located. The value of r at the minimum, r0^θ, is a good approximation of the first zero.

The integrated flow Vn (as defined in eq.3.23) is then calculated as:

$$ V_n^{\theta}\{\infty\} \equiv \frac{j_{01}}{r_0^{\theta}} , \qquad (3.27) $$
11In the context of STAR, a comparison between v2{2} and v2{4} is used to estimate the magnitude of nonflow effects [138] (see also [139] and [140]).
where j01 is a constant (j01 = 2.4, the first zero of the Bessel function J0, see [142] and [134]). The integrated flow values calculated in this way are then used to obtain the flow coefficients at different harmonics and the differential flow [142].
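The scan for the first zero can be sketched numerically (illustrative Python, using a single θ, whereas the real method averages over several θ values; the deterministic toy-event generator below is only there to make the sketch self-contained):

```python
import math

J01 = 2.405  # first zero of the Bessel function J0

def toy_event(M, v2_in, psi_r, n=2):
    """Deterministic sample of M angles from dN/dphi ~ 1 + 2 v2 cos[n(phi - psi_r)],
    obtained by inverting the cumulative distribution with Newton steps."""
    phis = []
    for i in range(M):
        target = (i + 0.5) / M * 2 * math.pi
        x = target
        for _ in range(20):
            f = x + (2 * v2_in / n) * math.sin(n * x) - target
            x -= f / (1 + 2 * v2_in * math.cos(n * x))
        phis.append(x + psi_r)
    return phis

def g_theta(r, theta, phis, n=2):
    """One event's G^theta(r) = prod_j [1 + i r cos(n(phi_j - theta))], unit weights."""
    g = 1 + 0j
    for p in phis:
        g *= 1 + 1j * r * math.cos(n * (p - theta))
    return g

def lee_yang_vn(events, theta=0.0, n=2, r_max=0.4, steps=400):
    """First local minimum of |<G^theta(r)>| gives r0; then V_n = j01 / r0
    and v_n = V_n / M  (eqs. 3.23 and 3.27)."""
    M = len(events[0])
    rs = [r_max * (k + 1) / steps for k in range(steps)]
    mods = [abs(sum(g_theta(r, theta, e, n) for e in events) / len(events))
            for r in rs]
    for k in range(1, steps - 1):
        if mods[k] <= mods[k - 1] and mods[k] < mods[k + 1]:
            return J01 / rs[k] / M
    return float('nan')
```

On a toy sample (for example 40 events of multiplicity 100 with v2 = 0.12, i.e. χ comfortably above 1) the scan recovers the input v2 to within a few per cent; for small χ the first minimum washes out, which reflects the reliability limit discussed in the text.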
The Lee-Yang zero method provides the smallest systematic error of all the methods, practically removing all nonflow effects (from any k-particle correlation). The main limitation of this method comes from statistical errors, which decrease only logarithmically with the number of events and depend dramatically on χn (χn ≃ vn√(2M), see eq.3.8); therefore it is not always applicable.
In some very recent developments, a way to estimate the event plane using the Lee-Yang zero method has been devised, by recasting it in a form similar to the standard event plane analysis [143]. Nonflow correlations are eliminated by using the information from the length of the flow vector, in addition to the event plane angle.
3.4.1 Applicability

A detailed discussion of the sensitivity of the three methods, with an estimate of their systematic and statistical errors, is given in sections V and VII of reference [134]. The main conclusions are summarized in the following.
Systematic Error
The main systematic error of a flow measurement is due to few-particle correlations (resonance decays, jets, transverse momentum conservation), generally classified as 'nonflow' (see sec.1.4). Nonflow effects become more important at low multiplicity and low genuine collective flow (see chap.4). Two-particle correlation methods, such as the event plane method itself, cannot disentangle the genuine flow from any other source of azimuthal correlation; therefore the systematic error due to nonflow has been studied in detail in sec.4.1 by applying the event plane method to Hijing simulations without flow. On real data (e.g. at STAR), nonflow effects are measured from the difference between v2{2}, obtained with the event plane method, and v2{4}, obtained with the 4th order cumulant [138].
The 2k-th order cumulant removes nonflow up to (2k − 1)-particle correlations; therefore the systematic error due to nonflow becomes smaller and smaller when using higher order cumulants. The minimum systematic error is reached with the Lee-Yang zero method.
Roughly, the relative systematic error of the event plane method is:

$$ \frac{\delta v_n}{v_n} = O\!\left( \frac{1}{M v_n^2} \right) . \qquad (3.28) $$
While for the (2k)-th order cumulant (with 2k > 4) the systematic error is:

$$ \frac{\delta v_n}{v_n} = O\!\left( \frac{1}{M^{2k-1} v_n^{2k}} \right) , \qquad (3.29) $$

but only if $v_n < 1/M^{\,1-\frac{1}{2k-2}}$; otherwise it becomes of the same magnitude as eq.3.30. The Lee-Yang zero method has the smallest systematic error, which is approximately:

$$ \frac{\delta v_n}{v_n} = O\!\left( \frac{1}{M} \right) . \qquad (3.30) $$
Flow is unambiguously identified when it is larger than other spurious correlations; this defines the conditions under which each of the three methods can be applied. The condition of applicability of the two-particle correlation method is vn ≫ 1/M^(1/2); it is vn ≫ 1/M^(3/4) for the 4th order cumulant, and vn ≫ 1/M for the Lee-Yang zero method.
Statistical Error
The statistical error on the flow measured with the three methods depends in general on the charged multiplicity and on the magnitude of the genuine flow, summarized by the parameter χn ≃ vn√M (which is related to the resolution of the event plane, see eq.3.8), and on the number of events entering the analysis. For values of χn ≫ 1, the relative statistical error associated with the three methods is of the order of:

$$ \frac{\delta v_n}{v_n} = O\!\left( \frac{1}{\chi_n \sqrt{N}} \right) , \qquad (3.31) $$
where N is the number of events entering the analysis. The statistical error of the cumulant method becomes much larger for χ < 1, and it increases with the cumulant order. The Lee-Yang zero method has the same problem for small values of χ, and it is not very reliable for χ < 0.5. The statistical error of the event plane method always has the order of magnitude of eq.3.31, and a larger sample of events may compensate for low values of χn.
At LHC energy the resolution parameter is expected to be well above 1 over a wide range of centralities (see sec.1.3), so in principle all the methods can be applied.
In the present thesis, only the event plane method has been fully implemented and tested so far for the ALICE environment. The event plane method has been chosen because, thanks to its easy implementation and low level of abstraction, it is the most appropriate for first-day physics at ALICE:

• the method is quite intuitive and the mathematics behind it is simpler than that of any other method, therefore it is easier to check the consistency of the results;
• the method provides a direct estimate of the event plane angle Ψn, which is a needed input for other kinds of analysis (e.g. HBT, jets);
• when the analysis is done on simulations, the reconstructed Ψn can easily be compared with the input of the simulations to optimize the applied cuts (see sec.5.3.2);
• theoretical predictions show that elliptic flow will be the dominant contribution to the azimuthal anisotropy of the events at LHC energy, minimizing the systematic error due to nonflow;
• the resolution parameter χ is expected to be well above unity at LHC, but even in a worse scenario (χ ≪ 1) the event plane method provides the lowest statistical error on the measured vn (making it the most convenient method to quickly obtain some results);
• since the event plane method cannot disentangle genuine collective flow from nonflow correlations, it can be used on Hijing simulations to experimentally characterize nonflow effects (as is done in sec.4.1).
However, for a longer term analysis at ALICE, the event plane method will probably be replaced by the more accurate 'new' Lee-Yang zero method [143], the implementation of which is currently under development.
Chapter 4
Feasibility of the Event Plane analysis
A large source of uncertainty on the measurement of elliptic flow is the unknown magnitude of nonflow effects at LHC energies, i.e. few-particle correlations not related to the reaction plane, such as jets and resonance decays.
In the present approach, nonflow effects have been simulated using Hijing (see sec.2.2.3). A first set of Hijing simulations has been produced with no genuine elliptic flow, in order to study the magnitude of the 'apparent' v2 that would be reconstructed with the event plane analysis, and to characterize its multiplicity (centrality) dependence (see sec.4.1).
The magnitude of nonflow reconstructed from the Hijing events has been compared with an extensive set of GeVSim simulations, produced with different combinations of the two most sensitive observables, i.e. the multiplicity and the magnitude of v2, leading to the feasibility 'grid' in fig.4.7 (see section 4.2).
Finally, a new set of Hijing simulations has been produced and boosted with the flow AfterBurner, to study the interplay between genuine flow and nonflow effects (see sec.4.3).
4.1 NonFlow estimate with Hijing

Nonflow effects have been studied using Hijing simulations, which by construction have no genuine flow (v2 = 0). This approach offers the possibility to characterize the contribution of nonflow effects alone and to determine their magnitude, by applying the event plane analysis (described in sec.3.2) to the simulated events and comparing the azimuthal correlation between subevents and between the reconstructed event plane and the simulated reaction plane.
A full detector reconstruction has been excluded because the aim of this preliminary study is to isolate and measure the nonflow correlations generated by Hijing. Therefore, the analysis was executed over the primary particles in the Hijing KineTrees 1.
Hijing, as briefly introduced in sec.2.2.3, includes all known physics effects arising from the superposition of pp collisions, such as jets, resonances and cascade decays. The implementation of Hijing also includes the possibility to switch on an internal parametrization of jet-quenching effects, which reproduces the energy lost in the medium by the leading parton of the jet.
Figure 4.1. (a) Charged multiplicity in one unit of rapidity as a function of the impact parameter of the collision. The arrow on the x axis indicates what we consider 'most central' in our rescaled definition of centrality. (b) dNch/dη distribution for 25000 Hijing events with jet-quenching and resonance decays switched on, showing both the minimum bias distribution (simulated on the full range of impact parameters, 0 < b < 16 fm) and the 'rescaled' one (with 7 < b < 14.5 fm). For comparison, the dNch/dη distribution of 25000 Hijing events with jet-quenching switched off is also shown (events simulated on the 'rescaled' impact parameter range: 7 < b < 14.5 fm).
A few remarks must be made about the current implementation of Hijing:

• The charged particle multiplicity produced is a factor of 2 higher than the current predictions for LHC (see sec.1.3.3). As shown in fig.4.1(a), the produced multiplicity can be reduced by rescaling the impact parameter. Therefore, we define as 'most central' a collision with impact parameter b = 7 fm (leading to a charged multiplicity dNch/dη = 2500 ± 300), and as 'most peripheral' a collision with impact parameter b = 14.5 fm (where the charged multiplicity can be as low as a few particles per event: dN/dη ∼ 0) 2.
1The KineTree is the list of all simulated particles (see chap.2.2.2). Detector effects are treated in a separate step (see sec.5.1).
2In this set of Hijing simulations, the multiplicity is about 50% higher than the prediction given in sec.1.3.3, leaving room for the large uncertainties of the extrapolations of dN/dη in PbPb collisions at LHC. The upper limit on b has the purpose of reducing the number of events with very low multiplicity, since the domain of ultraperipheral collisions is outside the scope of the flow analysis (see also sec.1.3).
• The rescaling of the impact parameter introduces a clear bias in the event kinematics: a larger impact parameter implies fewer binary interactions and therefore a lower number of produced jets. Unfortunately, the consequences of this approach on nonflow effects are not easy to quantify, and the study of particle production processes in Hijing is beyond the purposes of the present thesis. However, a comparison with existing measurements made at STAR (where nonflow is calculated from the difference between v2{EP} and v2{4} [139] [140]) shows that the present approach reproduces well both the magnitude and the centrality dependence of nonflow effects.
• The effects of jet-quenching and resonance decays (which can be turned on and off in Hijing) have been studied separately, to characterize the contribution of each of them to the observed nonflow. One side effect of jet-quenching is that the multiplicity becomes, on average, 60−70% higher, probably because the energy lost in the medium by the leading partons is transformed into soft radiation (low-pT particles). For the selected range of impact parameters, fig.4.1(b) shows the multiplicity distribution with and without jet-quenching.
As discussed in sec.3.4, the event plane analysis is equivalent to a two-particle correlation method, where the azimuthal correlation is quantified by:

$$ \left\langle v_2^2 \right\rangle = \left\langle \cos[2(\varphi_i - \varphi_j)] \right\rangle , \qquad (4.1) $$

where the average is taken over each pair of particles i, j in the event. By its definition, any kind of few-particle correlation contributes to the reconstructed v2.
According to the definition given in sec.1.4, the correlation between the reconstructed event plane and the true reaction plane (i.e. Ψ2^true − Ψ2^obs) due to 'flow' can be compared to the observed subevent correlation ∆Ψ2^sub = Ψ2^A − Ψ2^B to estimate the magnitude of nonflow effects. When the 'true flow' is 0, the observed subevent correlation gives a direct measurement of the 'nonflow' contributions.
Fig.4.2 shows the average cos[2(∆Ψ2)] for four sets of 25000 Hijing events each (with 7 < b < 14.5 fm), testing all possible combinations of jet-quenching and strong resonance decays on/off. The leftmost bin shows the 'true' resolution of the 2nd harmonic event plane, defined as:

$$ \left\langle \cos\!\left[ 2 \left( \Psi^{\rm true} - \Psi_2^{\rm obs} \right) \right] \right\rangle , \qquad (4.2) $$

while the following 3 bins (bins 2, 3 and 4) show the 'observed' resolution, extrapolated from the observed subevent correlation ∆Ψ2^sub using an iteration of eq.3.8 (see sec.3.2.1). Approximately:

$$ \langle \cos[2(\Psi_2 - \Psi_R)] \rangle \approx \sqrt{ 2 \left\langle \cos[2(\Psi_2^A - \Psi_2^B)] \right\rangle } , \qquad (4.3) $$
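As a toy sketch of this extrapolation (illustrative Python, hypothetical function name; the clamp at zero mirrors the fact that an anti-correlation between subevents has no flow interpretation):

```python
import math

def subevent_resolution(psi_a, psi_b, n=2):
    """Sub-event resolution sqrt(<cos[n(Psi_A - Psi_B)]>) and the low-resolution
    (chi << 1) estimate of the full-event resolution, eq. 4.3."""
    corr = sum(math.cos(n * (a - b)) for a, b in zip(psi_a, psi_b)) / len(psi_a)
    sub = math.sqrt(max(corr, 0.0))  # clamp: anti-correlation has no flow meaning
    full = math.sqrt(2.0) * sub      # one iteration of eq. 3.8 in the chi << 1 limit
    return sub, full
```

For larger χ the √2 prefactor is replaced by the full iterative solution of eq.3.8.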
Figure 4.2. True and observed event plane resolution (defined as ⟨cos[2(Ψtrue − Ψ2^obs)]⟩, see sec.3.2.1), for 4 different sets of 25000 Hijing events (with 7 < b < 14.5 fm), in all possible combinations of jet-quenching and strong resonance decays on/off, using different definitions of subevents (see text for the details). The two plots show the resolution of the event plane calculated without and with pT weights, respectively.
where Ψ2^A and Ψ2^B are the event plane angles calculated from two equal-multiplicity subevents. This is done for three different choices of subevents (random subevents, η subevents, and η subevents with a gap of 1 unit at mid-pseudorapidity). In fig.4.2(a) the 2nd harmonic ~Q vector is calculated with unitary weights, while in fig.4.2(b) pT weights are used (see sec.3.2.3).
It can immediately be noticed that the presence of jet-quenching effects introduces a small correlation with the true reaction plane Ψtrue (the 'true' resolution is larger, 1st bin in fig.4.2(a) and (b)), and, as expected, the use of pT weights in the calculation of ~Q2 enhances the resolution of the event plane for both genuine flow and spurious nonflow effects (see also fig.4.3).
But the most interesting result is the large amount of nonflow correlations that is observed using the event plane method. The figure, however, indicates a way out: since jets and decays cause particles to propagate in the same direction in φ-η 3, splitting the event into two rapidity or pseudorapidity intervals suppresses most of the nonflow correlations, and an even better suppression is achieved by choosing subevents that are well separated, e.g. by using a gap at mid-rapidity (at the cost of reduced statistics).
It is also interesting to note that the presence of a detectable event plane (due, in this case, to the correlation of the products of jet-quenching with the true Ψ) makes the observed event plane resolution less sensitive to the choice of the subevents (see second and last bin of fig.4.2(a)). This applies even better when the genuine elliptic flow is larger (see sec.4.3). In other words, for a large genuine elliptic flow signal the nonflow correlations become less important.
3See, for instance, chap.6.8 of reference [25].
Following the prescriptions of the event plane analysis [123], the resolution of the event plane is calculated from the observed values of cos[2(∆Ψ2^sub)]; it corrects the observed v2^obs to take into account the uncertainty in the determination of the reaction plane (see sec.3.2). In case nonflow is dominating, the 'apparent' event plane resolution (which is always small, since the reaction plane is not clearly defined) gives a large boost to the observed v2^obs (which is non-negligible, due to few-particle correlation effects), leading to an overestimate of the measured elliptic flow.
Figure 4.3. Elliptic flow v2 as a function of pT calculated w.r.t. the true reaction plane (squares) and w.r.t. the observed event plane, with and without resolution correction (triangles and circles, respectively); the resolution is calculated from ∆Ψ2^rnd-sub. Two sets of 25000 Hijing events are shown (7 < b < 14.5 fm), one with jet-quenching and strong resonance decays switched off (empty markers), the other with everything switched on (full markers).
The transverse momentum dependence of nonflow effects is shown in fig.4.3. The lower sets of points show the shape of v2 as a function of pT calculated with respect to the true reaction plane. The presence of jet-quenching introduces a small (⟨v2⟩ ≃ 0.1%) true flow effect, increasing with pT, which is completely absent when medium effects are switched off.
The two sets in the middle represent the pT dependence of the 'observed' nonflow (v2^obs calculated with respect to the observed 2nd harmonic event plane), whose magnitude is roughly the same for both inputs. Nonflow effects in Hijing are dominated by jet-like correlations, so that the small correlation with the true reaction plane due to jet-quenching is washed away almost completely.
The upper sets of points show the reconstructed v2 after applying the resolution correction (calculated from random subevents). The use of this 'apparent' event plane resolution (only due to nonflow) results in very large values of the reconstructed v2. The measured v2 also increases slowly with pT, with a saturation value at about 2 GeV/c (where v2 ∼ 10−15%). However, the increase is not linear, and v2 is very small for pT ≲ 1 GeV/c, where most of the particles are produced. This leads to an integrated nonflow ⟨v2⟩ ≃ 1.4%, with a difference of ±0.3% depending on the different settings of Hijing 4.
4Note that this is a particle-wise average for all the 'rescaled' Hijing events, with multiplicities
Figure 4.4. Nonflow v2 as a function of pT for 3 centrality classes, for Hijing events with jet-quenching and resonance decays on. Legend: σtot = 0−100% (average); σtot = 0−20% (dN/dη < 100); σtot = 40−60% (260 < dN/dη < 670); σtot = 80−100% (dN/dη > 1450).
Since nonflow effects in Hijing arise from fewparticle correlations such as jets, one might argue that their magnitude is inversely proportional to the total multipli city. In fact, eq.4.1 states that the presence of many randomly oriented particles washes away the correlation. A clear example of this is found in fig.4.4, where the pT dependence of v2 is plotted together for 3 well separated centrality classes (most peripheral 0−20%, midcentral 40−60%, and most central 80−100%). The lower multiplicity implies the higher nonflow effect.
The subevent method provides a way to quantify the amount of nonflow correlations. Fig.4.5(a) shows that the subevent correlation due to nonflow is approximately independent of the multiplicity of the event, and its magnitude depends on the choice of the subevents (see caption). Eq.4.3 quantifies the azimuthal correlation within the event and, in case of genuine flow, is used to calculate the resolution of the event plane⁵ [123].
In the present simulations v2 = 0, therefore the above average is a direct measurement of nonflow effects, which can be expressed by the parameter g̃2 in eq.4.4 [136]:

⟨cos[2(Ψ2^A − Ψ2^B)]⟩ = Msub (v2² + g2) = (M/2) v2² + (1/2) g̃2 .   (4.4)
The obtained results on g̃2 are compatible with the experimental estimates of nonflow made at STAR [139][140].
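The mechanism behind eq.4.4 can be illustrated with a toy Monte Carlo: events built only of back-to-back ('jet-like') pairs carry no common reaction plane, yet the random-subevent correlation stays positive. The sketch below is plain Python for illustration, not AliRoot code; the event and pair counts are arbitrary choices.

```python
import math
import random

def event_plane(phis):
    # 2nd-harmonic event plane angle from the Q-vector
    qx = sum(math.cos(2 * p) for p in phis)
    qy = sum(math.sin(2 * p) for p in phis)
    return 0.5 * math.atan2(qy, qx)

def subevent_correlation(n_events=500, n_pairs=50, seed=42):
    """<cos[2(Psi2_A - Psi2_B)]> for events made only of back-to-back
    pairs: pure nonflow, yet the subevent correlation is positive."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_events):
        phis = []
        for _ in range(n_pairs):
            phi = rng.uniform(0.0, 2.0 * math.pi)
            phis.append(phi)
            phis.append((phi + math.pi) % (2.0 * math.pi))  # jet-like partner
        rng.shuffle(phis)            # random assignment to subevents A and B
        a, b = phis[::2], phis[1::2]
        total += math.cos(2.0 * (event_plane(a) - event_plane(b)))
    return total / n_events
```

Because cos 2φ is identical for a particle and its back-to-back partner, every pair contributes coherently to the second-harmonic Q-vectors of both subevents, which is exactly the g̃2-type offset of eq.4.4.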
Eq.4.4 also makes clear that nonflow contributions do not simply add to v2, but to (M/2) v2². This is what we observe in fig.4.5(b), i.e. 〈ṽ2〉 = √(g̃2/M), with g̃2 obtained from cos[2(ΔΨ2^sub)] using random subevents (g̃2 ≃ 0.07). The two data sets represent the subevent correlation method with η subevents and the fullevent correlation method with random subevents (see sec.3.2.2). As expected, the subevent correlation method gives a lower observed v2, which is partially compensated by the lower observed resolution calculated from η subevents.

⁴(continued) distributed as in fig.4.1(b). The values of 〈v2〉 are obtained by integrating v2(pT) × dN/dpT (the v2^res of fig.4.3 convoluted with the generated dN/dpT spectrum).
⁵The 2nd harmonic event plane resolution for the subevents is immediately given by res2^sub = √⟨cos[2(Ψ2^A − Ψ2^B)]⟩, while the fullevent resolution is obtained using eq.3.8 (see sec.3.2.1).
Figure 4.5. (a) ⟨cos[2(Ψ2^A − Ψ2^B)]⟩ with respect to the charged particle multiplicity at midrapidity, for three different definitions of subevents (random, ±η, and ±η with a gap at midrapidity). (b) Observed ṽ2 from pure nonflow correlations vs dNch/dη, calculated with the sub- and fullevent correlation methods (using η and random subevents respectively). The dashed line shows the upper limit on nonflow (g̃2 ≃ 0.07, calculated from random subevents).
Since the magnitude of cos[2(ΔΨ2^sub)] changes significantly for different definitions of subevents (see fig.4.5(a)), with a proper choice of the analysis settings it is possible to minimize these effects. In particular, splitting the event into positive and negative rapidity suppresses most of the nonflow correlation, and even more suppression is achieved by cutting away a slice at midrapidity (limiting the analysis to 0.5 < |η| < 1); however, this solution reduces the statistics and worsens the resolution of the reconstructed event plane.
In a realistic situation, the best choice is to use the fullevent correlation method and to extrapolate the resolution from η subevents (see sec.5.4).
4.2 Flow simulation with GeVSim

As explained in the previous chapter, the accuracy of the event plane method in reconstructing the elliptic flow coefficient v2 depends on two ingredients: particle multiplicity and magnitude of the elliptic flow⁶. These two quantities are combined in the parameter χ2 = v2 × √(2M), which is used to calculate the event plane resolution [123]. The event plane method has a relative systematic error on the measured v2 proportional to 1/(M v2²) ∝ 1/χ2² [134]. Unlike the other approaches, the event plane method has (in principle) no lower limit on χ2. However, when the uncertainty on v2 becomes of the same order of magnitude as the measured value, the method is no longer reliable.

⁶An estimate of the systematic uncertainty of different flow analysis methods was given in section 3.4, see also [134].
To avoid any bias from theoretical predictions and from the detector acceptance, this feasibility study has been performed on a wide sample of pure Monte Carlo simulations (without full detector reconstruction), produced with different combinations of multiplicity and v2. A total of 49 sets of GeVSim events have been simulated: 7 centrality classes (with dN/dη ranging between 100 and 10000 particles per unit rapidity) times 7 different input values of v2 (with v2^sat = 1% to 50%).
Figure 4.6. Observed elliptic flow v2^obs (a) and fullevent plane resolution ⟨cos[2(Ψ2 − Ψtrue)]⟩ (b), with respect to the charged multiplicity at midrapidity (dN/dη = 100 to 10000). The 7 data sets represent the 7 input values of 〈v2〉 (0.3% to 16.8%) listed on the right side of the plot (see tab.4.1).
The parametrization of v2(pT) is described in sec.1.3.4: v2 increases linearly for pT < 2 GeV/c and stays constant at its saturation value for pT ≥ 2 GeV/c. The particle spectrum is an exponential pT distribution with slope parameter (temperature) T0 = 250 MeV, flat in pseudorapidity (model n.1 in GeVSim, see [109]), generated in the kinematic range 0 < pT < 10 GeV/c and |η| < 1. The charged particle composition is 80% pions, 10% kaons, and 10% protons/antiprotons⁷ (the settings are the same for all multiplicities).
In this way, the integrated v2 is linearly proportional to its saturation value:

〈v2〉 = (1/Ntot) Σ_{i=1}^{Ntot} v2^i
     = (1/Ntot) [ ∫_0^{2 GeV/c} v2^sat (dN/dpT)(pT/pT^sat) dpT + ∫_{2 GeV/c}^{10 GeV/c} v2^sat (dN/dpT) dpT ]
     = v2^sat × [ (1/Ntot) ( ∫_0^{2 GeV/c} (dN/dpT)(pT/pT^sat) dpT + ∫_{2 GeV/c}^{10 GeV/c} (dN/dpT) dpT ) ]
     = v2^sat × k_{s→i} .   (4.5)

The above integral gives k_{s→i} = 1/2.98. Integrated and saturation values of v2 for the present set of simulations are summarized in table 4.1.
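The factor k_{s→i} can be evaluated numerically for any assumed spectrum. The sketch below uses a simple dN/dpT ∝ pT·exp(−pT/T0) shape with T0 = 250 MeV as an illustrative assumption; the exact GeVSim model n.1 spectrum, and hence the quoted value 1/2.98, depends on the full spectrum details.

```python
import math

def k_sat_to_int(t0=0.25, pt_sat=2.0, pt_max=10.0, n=20000):
    """Evaluate k_{s->i} = <v2>/v2_sat for v2(pT) rising linearly up to
    pt_sat and saturating above it, folded with an assumed spectrum
    dN/dpT ~ pT * exp(-pT/T0), T0 = 250 MeV (pT in GeV/c)."""
    dpt = pt_max / n
    num = den = 0.0
    for i in range(n):
        pt = (i + 0.5) * dpt
        weight = pt * math.exp(-pt / t0)      # assumed spectrum shape
        v2_shape = min(pt / pt_sat, 1.0)      # linear rise, then saturation
        num += weight * v2_shape * dpt
        den += weight * dpt
    return num / den
```

Since most particles sit well below the saturation momentum, the integrated flow is always substantially smaller than v2^sat, whatever the detailed spectrum.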
⁷This is slightly different from the particle composition generated by Hijing (see sec.5.4); however, the magnitude of v2 is the same for all particle species, and the detector acceptance is not taken into account.
Table 4.1. Saturation v2 and the resulting integrated flow.

v2^sat (%):  1    2    5    10   20   30   50
〈v2〉 (%):   0.3  0.7  1.7  3.4  6.7  10.1 16.8
The number of events produced for each sample is inversely proportional to the particle multiplicity (so that the analysis always involves the same number of particles): Nevts × dN/dη = 2 · 107.
The event plane analysis has been applied to charged primary particles (π±, K±, p and p̄) with |η| < 1. Figure 4.6 summarizes the results on the observed v2^obs and the observed event plane resolution. For higher multiplicity and higher v2 the resolution saturates (i.e. res2 ∼ 1) and the measured elliptic flow becomes more accurate (the observed v2 approximates the generated one very well and the resolution correction becomes negligible).
Figure 4.7. Feasibility of the event plane analysis with respect to the particle multiplicity M and the genuine 〈v2〉. The dashed lines represent the observed 'nonflow' ṽ2 = √(g̃2/M), calculated from pure nonflow effects in Hijing using different definitions of subevents (see sec.4.1); the central value is g̃2 = 0.05, measured from η subevents. Each marker represents a set of GeVSim simulations, with M and 〈v2〉 given by the x and y coordinates respectively. The measured χ2 = v2√(2M) is compared to √g̃2 and the shape of the marker ('e.p. feasible', 'e.p. unfeasible', 'e.p. failed') is assigned accordingly. The event plane analysis fails when the calculated resolution is imaginary (i.e. ⟨cos[2(ΔΨ2^sub)]⟩ < 0).
Using the reconstructed values of χ2 from the above simulations (sec.4.2) and the magnitude of nonflow effects estimated with Hijing (sec.4.1), it is possible to draw figure 4.7, which summarizes the feasibility of the event plane analysis for any possible scenario of multiplicity and v2.
The event plane method only has problems at low values of the integrated v2 (i.e. 〈v2〉 ≲ 2%) and at low multiplicities; it works fine elsewhere.
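The feasibility criterion of fig.4.7 amounts to a one-line comparison between the flow signal χ2 = v2√(2M) and the nonflow level √g̃2. An illustrative helper (not part of the analysis package):

```python
import math

def ep_feasible(m, v2, g2=0.05):
    """True when the genuine flow signal chi2 = v2*sqrt(2M) exceeds the
    nonflow level sqrt(g2) (cf. fig.4.7; g2 = 0.05 is the Hijing
    estimate from eta subevents)."""
    return v2 * math.sqrt(2.0 * m) > math.sqrt(g2)
```

For example, a central-like event sample (M = 2000, v2 = 5%) is comfortably feasible, while a very peripheral one (M = 100, v2 = 0.3%) is not.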
4.3 Flow + nonflow

To study the interplay between flow and nonflow effects, a set of 50,000 'rescaled' Hijing events has been produced (7 < b < 14.5 fm, with jet-quenching and resonance decays on, and no full detector reconstruction) and boosted with the flow AfterBurner (see sec.2.2.3), using values of elliptic flow extrapolated from the lowest hydrodynamic estimate (with c_s² = 0.22, see sec.1.3.2); v2 is assigned to each event according to its impact parameter b. These KineTrees represent the input for a realistic scenario, where both flow and nonflow effects are present and v2 has a continuous dependence on the impact parameter.
Figure 4.8. (a) Observed v2 vs dN/dη from 50,000 simulated Hijing events (with jet-quenching and resonance decays) boosted with the flow AfterBurner; the expected values of v2^obs (v2^obs = v2^in × res^th) are shown as well (see tab.4.2). (b) Theoretical, true and observed event plane resolution vs dN/dη (calculated using eq.3.8, Ψ2^obs − Ψtrue and ΔΨ2^{η sub} respectively).
The centrality class selection is based on the multiplicity of final state particles (as it would be in a real experiment): the dN/dη distribution has been divided into ten intervals, each one containing approximately 10% of the total number of events (i.e. 10% of the total inelastic cross section).
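A minimal sketch of such a multiplicity-based centrality selection (illustrative only; the real analysis works on the reconstructed dN/dη distribution):

```python
def centrality_classes(multiplicities, n_classes=10):
    """Split events into centrality classes of equal population from
    the final-state multiplicity: each class holds ~1/n_classes of the
    events (~10% of the total inelastic cross section for n_classes=10)."""
    ordered = sorted(multiplicities, reverse=True)   # most central first
    size = len(ordered) // n_classes
    return [ordered[i * size:(i + 1) * size] for i in range(n_classes)]
```

Each class is defined purely by event counting, so the mapping to impact parameter remains broad, as discussed next.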
Due to the fluctuations involved in the particle production processes (see also sec.5.4), each multiplicity class contains events in a large range of impact parameter and, consequently, with a large spread in v2. This also reproduces a more realistic v2^RMS, and therefore a better estimate of the statistical error on the measured v2 (see also sec.5.5).
Fig.4.8(a) shows the integrated values of v2^obs (the observed v2 without resolution correction, see sec.3.2):

⟨v2^obs⟩ = ⟨cos[2(φi − Ψ2)]⟩ .   (4.6)
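A compact sketch of eq.4.6, including the standard removal of each particle's own contribution from the Q-vector before estimating Ψ2 (autocorrelation subtraction); this is an illustration in plain Python, not the AliRoot implementation:

```python
import math

def v2_obs(events):
    """Observed v2 (eq.4.6): <cos[2(phi_i - Psi_2)]>, removing the
    particle's own contribution from the Q-vector before estimating
    the event plane, to avoid autocorrelations."""
    total = count = 0.0
    for phis in events:
        qx = sum(math.cos(2 * p) for p in phis)
        qy = sum(math.sin(2 * p) for p in phis)
        for p in phis:
            psi = 0.5 * math.atan2(qy - math.sin(2 * p),
                                   qx - math.cos(2 * p))
            total += math.cos(2.0 * (p - psi))
            count += 1
    return total / count
```

Dividing the result by the event plane resolution then yields the corrected 〈v2〉, as described in sec.3.2.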
Table 4.2. Summary table of the 50k Hijing + AfterBurner simulations. For each centrality class, both the input and the reconstructed v2 and resolution are listed.

σtot %   dNch/dη max  dNch/dη min  ⟨dNch/dη⟩  ⟨v2^in⟩  res^th  ⟨v2^meas⟩  res^obs
0−10        3000         1880        2322      2.37%   0.767    2.60%     0.844
10−20       1880         1350        1603      5.16%   0.934    5.14%     0.955
20−30       1350          960        1147      7.29%   0.956    7.23%     0.973
30−40        960          660         802      8.86%   0.957    8.73%     0.976
40−50        660          440         545      9.72%   0.946    9.65%     0.972
50−60        440          280         355      9.81%   0.914    9.85%     0.955
60−70        280          170         221      9.02%   0.826    9.25%     0.905
70−80        170          100         133      7.39%   0.639    8.18%     0.784
80−90        100           50          74      5.44%   0.393    7.29%     0.608
90−100        50            0          28      3.56%   0.167    8.09%     0.400
The figure also shows the 'expected' values of ⟨v2^obs⟩, calculated for each centrality class as 〈v2^in〉 times the expected resolution res^th, obtained from the input values of v2 and multiplicity using eq.3.8 (see tab.4.2).
Figure 4.9. Measured and simulated v2 vs dN/dη for the 50k Hijing + AfterBurner events (see tab.4.2); nonflow effects are calculated as the difference between the two values (for a comparison with experimental results at lower energy, see reference [139]).
Nonflow effects are responsible for the larger magnitude of the reconstructed v2^obs with respect to the simulated one. The same happens to the resolution of the event plane calculated from subevents (fig.4.8(b)), where we see the same systematic effect. In mid-central and most central events (500 < dN/dη < 2000) the genuine elliptic flow is much higher than the nonflow contribution (M v2² ≫ g̃2, see eq.4.4) and therefore the analysis works perfectly. More peripheral events (dN/dη < 500) are an example of a situation where M v2² ∼ g̃2, and therefore the event plane analysis leads to an incorrect result (lower left corner of fig.4.7).
Finally, fig.4.9 shows the systematic increase in the measured values of v2 due to nonflow effects. The difference between simulated and reconstructed v2 is an estimate of the nonflow contributions to the measurements, assuming the centrality dependence of v2 is described by the hydro extrapolation. For a comparison with experimental results from STAR, see the references [139] and [140].
Chapter 5
Simulations & Results
The event plane analysis has been studied for lead-lead collisions at the LHC by means of an elaborate set of AliRoot simulations, in order to determine its feasibility with the ALICE detector.
Using the parametrizations described in sec.1.3, particle multiplicity and elliptic flow have been extrapolated to LHC energy under three different assumptions on the impact parameter dependence of v2 (considered as the upper/lower limit of the existing predictions). Using these extrapolations, three sets of fully reconstructed GeVSim events have been produced in a few centrality classes, and the event plane analysis has been optimized including detector effects.
The study of the analysis cuts, their optimization, and the efficiency corrections are discussed in sec.5.1 and 5.2 respectively, while the results of the flow analysis of the GeVSim events are presented in section 5.3 (including an estimate of the systematic error of the measurement).
Using the extrapolation with the lowest v2 (see section 1.3), a set of Hijing events with flow AfterBurner has been simulated and fully reconstructed, leading to a complete set of data including both flow and nonflow effects. In this more realistic scenario, the reconstructed values of v2 have been compared to the simulated ones, and the systematic effects due to nonflow have been calculated (see sec.5.4).
5.1 Efficiency study

In a real experiment, the accuracy of the event plane analysis depends on the detector performance. In the present approach, detector effects are quantified by two main 'estimators', which can both be studied using Monte Carlo simulations with full detector reconstruction (as provided by the AliRoot framework, see sec.2.2.2):
• the reconstruction efficiency (i.e. how many primary stable particles are actually reconstructed by the detector),
• and the purity of the sample (i.e. how accurately the reconstructed tracks match the simulated primary particles).
The aim of the present analysis is to measure both differential and integrated elliptic flow of unidentified charged primary particles produced in the interaction, therefore the track selection is optimized for selecting primary stable hadrons. We define as ‘stable’ a particle that lives long enough to reach the ALICE TPC and can be fully reconstructed [71] (i.e. π±, K±, p and p¯).
5.1.1 Efficiency & Purity

For any applied cut, NESD is the total number of reconstructed AliESDtracks passing the cut, and N′ESD is the number of 'correctly reconstructed' primary tracks passing the cut (i.e. tracks which are reconstructed from primary stable hadrons within the same pT bin as the generated particles, see below). N′MC is the number of primary stable hadrons (π±, K±, p and p̄) generated within the acceptance of the detector (i.e. pT > 0.1 GeV/c, |η| < 0.9 and 0 ≤ φ < 2π¹, see sec.2.1).
Figure 5.1. (a) Transverse momentum resolution, defined as ⟨ΔpT⟩/pT, where ⟨ΔpT⟩ is the RMS of the ΔpT distribution of charged primary hadrons in the central barrel detector. (b) ΔpT as a function of pT: the curves show 2.5 × RMS(pT^ESD − pT^MC) and its linear approximation 0.05 × (1 + pT) (at 2.5 RMS, ∼ 99% of the ΔpT distribution).
Fig.5.1(a) shows that the relative transverse momentum resolution of the TPC, ∆pT/pT , weakly depends on pT in the momentum range of interest (pT = 0.1 to 10 GeV/c), therefore the condition for a track to be reconstructed in the same pT bin as the simulated primary particle can be approximated as:
∆pT < w0 × (1 + pT/GeV/c), (5.1)
where ΔpT = pT(ESD) − pT(MC). The ΔpT distribution is roughly Gaussian for each pT, with a longer tail on the right side due to the statistically larger abundance of low momentum tracks reconstructed at a higher pT (however, this effect is less than 1% for primary tracks and can be neglected).

¹Detector cracks are not taken into account in the definition of efficiency and purity.
The parameter w0 of eq.5.1 is chosen to linearly approximate the observed RMS of ΔpT as a function of pT (fig.5.1(b)) at 2.5σ (∼ 2.5 RMS in ΔpT), where 99% of the track candidates are found: w0 = 50 MeV/c (with pT expressed in GeV/c). With this parameter, eq.5.1 defines the minimum bin size in pT such that the number of particles reconstructed in the wrong bin is negligible (≲ 1%)².
Eq.5.1 approximates the requirement that in the final histograms a reconstructed track enters the same pT bin as the simulated particle.
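As a sketch, the matching condition of eq.5.1 reduces to a one-line predicate (here the generated pT enters the right-hand side, an assumption since eq.5.1 does not specify which of the two momenta sets the bound):

```python
def same_pt_bin(pt_esd, pt_mc, w0=0.05):
    """Eq.5.1: a track counts as 'correctly reconstructed' when
    |pT(ESD) - pT(MC)| < w0 * (1 + pT), with w0 = 50 MeV/c and pT in
    GeV/c (2.5 RMS of the momentum resolution, ~99% of the tracks)."""
    return abs(pt_esd - pt_mc) < w0 * (1.0 + pt_mc)
```

For example, a 20 MeV/c mismatch at pT = 1 GeV/c passes, while a 500 MeV/c mismatch at pT = 3 GeV/c does not.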
The efficiency is defined as the number of primary charged hadrons (π±, K±, p and p̄) correctly reconstructed divided by the number of primary charged hadrons generated in the acceptance:

eff = N′ESD / N′MC .   (5.2)
For a specific particle type, the efficiency is a detector property which depends on the detector configuration, the geometrical acceptance, the reconstruction algorithm and the applied cuts (see sec.5.1.2).
The purity is defined as the number of correctly reconstructed primary tracks divided by the total number of reconstructed tracks within the cut:

pur = N′ESD / NESD .   (5.3)
Without considering the experimental determination of the particle identification, the purity quantifies the level of contamination of the reconstructed spectra (e.g. from secondaries and from tracks reconstructed at the wrong momentum). It depends on the applied cuts and on the simulated input spectra³.
By definition, both efficiency and purity are smaller than 1 (the particles counted in the numerator are a subset of the ones counted in the denominator). We will consider efficiency and purity differentially (as a function of the transverse momentum pT), and integrated (over the range of interest of the present analysis, i.e. between pT = 0.1 and 10 GeV/c).
The correction to the observed spectra (when the simulated one is given) is expressed by the ratio:

corr = eff / pur = (N′ESD / N′MC) × (NESD / N′ESD) = NESD / N′MC .   (5.4)
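The three estimators of eqs.5.2–5.4 follow directly from the track counts defined in sec.5.1.1; a minimal illustrative helper (per pT bin, not part of the analysis code):

```python
def eff_pur_corr(n_mc_prim, n_esd_all, n_esd_prim):
    """Efficiency (eq.5.2), purity (eq.5.3) and spectrum correction
    factor (eq.5.4) from the three track counts of sec.5.1.1."""
    eff = n_esd_prim / n_mc_prim    # N'_ESD / N'_MC
    pur = n_esd_prim / n_esd_all    # N'_ESD / N_ESD
    corr = eff / pur                # = N_ESD / N'_MC
    return eff, pur, corr
```

Note that N′ESD cancels in the correction factor: corr compares all reconstructed tracks directly to the generated primaries.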
In a situation of known input spectra (where efficiency and purity could be determined exactly), the original signal is exactly recovered by dividing the reconstructed spectra by this factor, i.e.:

d³N/(dpT dη dφ) = [1 / corr(η, pT, φ)] × d³N^obs/(dpT dη dφ) .   (5.5)

²Due to the limited statistics, the present analysis used a pT bin size at least twice as large as the lower limit given above.
³Both the number of secondaries and the contamination from other pT bins depend on the number of primary particles generated at each pT.
In reality such a correction factor can only be determined by simulating realistic input spectra, which should be modeled on observed experimental data, not yet available in ALICE. Therefore the effect of impurities is absorbed into the systematic error, and the only correction applied is the detector efficiency as a function of pT.
5.1.2 Particle Composition

The reconstruction efficiency (and its transverse momentum dependence) is different for different particle species, while the particle composition of the sample is not known and, moreover, is not constant as a function of pT.
Figure 5.2. Top row: generated and reconstructed dN/dpT spectra for pions, kaons and protons in the ALICE central barrel detector (|η| < 0.9). Bottom row: efficiency and purity as a function of pT for pions, kaons and protons. No particle identification is involved; the cuts applied to the data are discussed in sec.5.2.
Using the Monte Carlo information from the simulations, both efficiency and purity can be studied for each particle species separately, showing their different pT dependence. Fig.5.2 shows the generated and reconstructed pT spectra of pions, kaons and protons produced at |η| < 0.9 (top row), and their reconstruction efficiency and purity (bottom row).
Since the aim of the present analysis is the characterization of the elliptic flow of unidentified charged particles, the efficiency corrections are calculated from the overall spectra of reconstructed tracks, without involving the effects of particle identification. Not knowing the particle composition and the shape of the dN/dpT distribution for each particle, a way to determine the systematic error of this procedure is to look at the differences between different predictions for heavy ion events at LHC.
Figure 5.3. Overall efficiency (a) and purity (b) in the ALICE central barrel detector (|η| < 0.9) as a function of pT for two sets of simulations, Hijing and GeVSim; the absolute value of the difference is shown as well. The same set of cuts is applied to both samples (see sec.5.2).
The simulations presented in this chapter are produced from two different scenarios: the particle ratios of the GeVSim events (tab.5.2) are calculated with an implementation of the thermal model for particle production (Thermus [144]), while the particle composition of the Hijing events (tab.5.4) is determined by its internal implementation of QCD interactions and hadronization processes [104].
Figure 5.3 shows the overall efficiency and purity as a function of pT for the two different inputs (Thermus and Hijing). The difference between the two gives an estimate of the systematic error for the a priori unknown particle ratios. However, the difference is very small, due to the fact that in both models the majority of particles are pions, and the amount of protons and kaons is only of the order of 10% (this prediction is supported by experimental data from RHIC, see for instance [45]).
The figure also shows that the reconstruction efficiency rapidly saturates to its maximum (eff^max ∼ 90%) for pT ≳ 1 GeV/c. This is determined by the dominating contribution from pions, and a similar behaviour is also observed for protons (see fig.5.2), while the kaon reconstruction efficiency saturates only at pT ≃ 2 GeV/c due to decays⁴.
⁴Part of the K mesons with low momentum decay before reaching the TPC (the mean lifetime of the charged kaon is τK± ≃ 1.238 × 10⁻⁸ s, giving cτK± ≃ 3.7 m). At large momentum (for
The systematic error on the efficiency due to the unknown particle ratios is calculated from the difference between the efficiencies of the two sets of simulations (Hijing and GeVSim); see sec.5.2.2 for the details.
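The kaon efficiency loss quoted above can be understood from the decay-length argument of the footnote; a rough survival-probability sketch (the radius the kaon must reach is an assumed value, and pT is used as a proxy for the total momentum at midrapidity):

```python
import math

def kaon_survival(pt, radius=2.5, ctau=3.7, mass=0.494):
    """Probability that a charged kaon of momentum pt (GeV/c) reaches
    a radius `radius` (m) before decaying: exp(-radius*mass/(pt*ctau)),
    with c*tau_K = 3.7 m; radius = 2.5 m (rough TPC outer radius) is
    an assumption."""
    return math.exp(-radius * mass / (pt * ctau))
```

The survival probability rises steeply with momentum, qualitatively matching the kaon efficiency saturation near pT ≃ 2 GeV/c seen in fig.5.2.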
5.1.3 Multiplicity (in)dependence

Figure 5.4 shows that, in agreement with the ALICE PPR [71], the efficiency is almost constant with respect to the particle multiplicity. However, comparing the efficiency of peripheral, mid-central and central events (i.e. from dN/dη ∼ 100 to ∼ 2000 tracks per unit rapidity, according to the extrapolation given in sec.1.3), a systematic decrease in the reconstruction efficiency as a function of pT can be observed over the whole range of interest of the present analysis (0.1 < pT < 10 GeV/c).
Figure 5.4. Track reconstruction efficiency as a function of pT at three multiplicities dN/dη|_{|η|<0.5} ≃ 100, 500, 2000. The absolute value of the difference between the highest and the lowest multiplicity samples is shown as well (data from the GeVSim simulations, see tab.5.1 for details).
In the present analysis the efficiency is calculated as an average over all produced events. Therefore, the difference in efficiency between the lowest and highest multiplicity events (which is of the order of a few %) is added to the systematic error on the calculated efficiency. For more details, see sec.5.2.2.
5.1.4 Main Vertex

The nominal η acceptance of the TPC is from −0.9 to 0.9 for collisions at z = 0 (center of the TPC). However, due to the geometrical arrangement of the beam crossing, the collision can happen anywhere in an 'interaction diamond', i.e. in a length of about 30 cm along the z axis (see sec.2.3.1). As the location of the primary vertex changes event by event, the η acceptance of the ALICE TPC is also different for each event. An event at the edge of the interaction region, with main vertex at z = 15 cm, will see the TPC with an acceptance −0.93 ≲ η ≲ 0.85 (see sec.2.1.2).

⁴(continued) pT > mK±/c) the kaon lifetime becomes significantly Lorentz dilated.
Figure 5.5. Reconstruction efficiency as a function of pseudorapidity for a fixed main vertex position (empty markers) and for a vertex position Gaussianly distributed in the 'interaction diamond' (−15 ≲ z ≲ 15 cm). The plot is obtained from 2 sets of 1000 fully reconstructed GeVSim events, generated with a flat dN/dpTdη spectrum.
Considering events with a Gaussian distribution of the main vertex position, the overall efficiency rapidly drops above |η| ≃ 0.85, as we see in fig.5.5. The figure shows the efficiency of primary particle reconstruction, as a function of the pseudorapidity, for two different sets of simulations, produced with fixed and variable primary vertex position. The first set has the main vertex fixed at z = 0; the second has the main vertex randomly located along z (in a Gaussian distribution with σz = 5.3 cm).
In a realistic case (the latter), a symmetric η cut should be used in a region of flat efficiency (e.g. |η| < 0.85), to avoid introducing an artificial asymmetry in the event. However, the simulations presented in this chapter have been produced with a fixed primary vertex position at z = 0, and moreover the η dependence of v2 is parametrized flat on the full η range (see sec.1.3).
Therefore, in the present analysis, only a sharp cut at |η| < 0.9 is applied (to include the widest TPC range with a 'uniform' tracking efficiency). The measurements are averaged over the whole detectable pseudorapidity interval, and the main vertex position has not been taken into account in the calculation of the systematic error.
5.2 Cut optimization

Cuts are studied with respect to the reconstruction efficiency and the purity of the sample, using as input the full set of GeVSim and Hijing simulations (see sec.5.3 and 5.4).
The aim of the cuts is to isolate primary particles, to allow a clean reconstruction of the differential shape of v2(pT) without any further correction and without losing too much statistics, in order to obtain a good balance between statistical and systematic error.
Detector signal
Our main interest is in measuring the elliptic flow of unidentified charged tracks reconstructed in the ALICE central barrel detectors. Therefore, the first cut consists in selecting tracks reconstructed in the TPC⁵, in the pseudorapidity range of full coverage (|η| < 0.9). In addition we want the track fit to be propagated (at least) to the ITS, so that the extrapolation to the primary vertex becomes more reliable (δVTX ≲ 100 µm, see below). Fortunately the efficiency drops by less than 5% when requiring the ITS signal in association with the TPC (see chap.5 of the ALICE PPR [25]).
Since no particle identification is needed for the measurement of unidentified particle flow, the outermost detectors of the central barrel (TRD and TOF) are not part of the present analysis (however, if a particle reaches them, the Kalman filter includes them in the track fit, improving the precision, see sec.2.3). Due to the larger distance from the interaction point and the smaller θ coverage, requiring a TRD and TOF signal would introduce a strong cut in pT and η, and the overall efficiency would be dramatically reduced from 80−90% to less than 60% (see chap.5 of the ALICE PPR [25]).
Constrainability condition, fit χ2 and number of fit points/max (TPC)

A first selection of primary tracks is realized by the constrainability condition, where a track is defined 'constrainable' if the main vertex of the collision can be included as a fit point.
In the reconstruction code, the constrainability of tracks is tested at the third pass of the fit procedure (see sec.2.3), when the track is refitted from its outermost point inward. The main vertex is fed to the Kalman filter as an additional space point and, if the fit succeeds with an 'acceptable'⁶ χ2, the track is labeled as constrainable and the constrained parameters of the track are updated with the last refit. The
⁵In the present analysis, only full tracks are considered (for which, by definition, the tracking algorithm starts from the TPC, see sec.2.3). Track segments reconstructed by other detectors (e.g. ITS 'tracklets') are not taken into account.
⁶From the χ2 distribution (fig.5.6(a)) we see that a value of χ2 < 77 is 'acceptable' (i.e. χ ≲ 8.8σ′, where σ′ is the uncertainty on the primary vertex position).
Figure 5.6. (a) dN/dχ2 of all constrainable AliESDtracks (with TPC + ITS signal) and all constrainable primaries. The full histogram represents the differential distribution, while the upper lines represent the integrated number of tracks for any given cut on the fit χ2; the number of simulated primaries is shown as well. (b) Integrated efficiency and purity as a function of the cut on the fit χ2. The total purity is not very sensitive to the applied cut; instead, the ratio of primaries to all tracks reconstructed at each χ2 provides a more sensitive estimator.
constrainability of the track is a necessary condition to ensure that an extrapolation of the track’s parameters exists in the proximity of the interaction point.
The constrainability condition alone is very efficient in removing secondary tracks, while a stricter requirement on the fit χ2 does not considerably improve the purity of the selection (see fig.5.6(b)).
However, the ratio between constrainable tracks and constrainable primaries as a function of the fit χ2 (the ratio dN′ESD/dNESD in each bin of the dN/dχ2 distribution) drops to 50% at χ2 = 20, i.e. less than half of the constrainable tracks reconstructed with χ2 ≥ 20 actually come from primary particles. Therefore, in the present analysis, the cut χ2 < 20 has been applied (a wider cut would increase the background more than the signal).
Due to the low sensitivity of the efficiency to this cut, the fit χ2 has not been included in the calculation of the systematic error.
A one-to-one comparison between reconstructed and simulated particles shows a non-negligible contribution of 'double counted' (or split) tracks. The experimental precision of the tracking device and the accuracy of the reconstruction algorithm can cause a single particle going through the TPC to be reconstructed twice, producing two different track candidates in the AliESD.
This applies both to curved (low momentum) tracks at η ∼ 0, which spiral back toward the primary vertex, and to straight (high momentum) tracks flying across different detector elements which are not perfectly aligned.
A strategy developed at STAR [129] to suppress this effect is to apply a cut over
Figure 5.7. (a) dN/d(Nfit/Nmax) of all constrainable AliESDtracks (with TPC + ITS signal) and all constrainable primaries. The full histogram represents the differential distribution, while the upper lines represent the integrated number of tracks for any given cut on Nfit/Nmax in the TPC. The number of simulated primaries is shown as well. (b) Efficiency and purity as a function of the cut on Nfit/Nmax in the TPC.
the number of fit points (from which the track candidate is interpolated) normalized by the number of clusters that the track could produce in the detector. The actual number of space points used for the track fit, Nfit, is stored in the AliESDtrack object (see sec.2.3) and, in addition, the reconstruction algorithm uses a helix parametrization of the track to estimate the number of clusters Nmax that a particle flying along the reconstructed trajectory would give in each detector element. This is particularly important in the TPC, where the number of sensitive elements is large and the fit of each track can include up to 160 space points (see sec.2.1.2).
Figure 5.8. Total efficiency and purity (in the range 0.1 < pT < 10 GeV/c, η < 0.9) for all the applied cuts, showing the results for both the GeVSim and the Hijing simulations (see sec.5.3 and 5.4 respectively).
A cut on the ratio Nfit/Nmax > 0.6 in the TPC helps in removing the contributions from double counted tracks and slightly improves the purity, by ∼ 1% (see fig.5.8). However, both the efficiency and the purity show a very flat dependence on this cut (see fig.5.7); even a 10−20% systematic error on the value of Nfit/Nmax has a negligible effect on the calculated efficiency. Therefore this cut has not been included in the calculation of the systematic error (see sec.5.2.2).
Fig.5.8 summarizes the cuts applied in the present analysis, showing the integrated efficiency and the purity of the selection passing all cuts, separately for the GeVSim and the Hijing simulations. On top of the basic set of cuts (tracks with both TPC and ITS signal, with at least 60% of the TPC clusters included in the fit and a constrained χ2 < 20), further cuts can be applied to enhance the purity of the track candidates, e.g. a cut on the distance of closest approach to the main vertex (see below).
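The combined selection can be expressed as a simple predicate over the track quantities. The Track record below is a hypothetical flat stand-in for the relevant AliESDtrack fields, and the sample values are invented:

```python
from dataclasses import dataclass

# Hypothetical flat record of a reconstructed track; in the real analysis
# these quantities come from the AliESDtrack object.
@dataclass
class Track:
    has_tpc: bool
    has_its: bool
    constrainable: bool
    chi2: float
    nfit_over_nmax: float   # fit clusters / findable clusters in the TPC
    t_dca_cm: float         # transverse DCA in cm

def passes_cuts(t: Track) -> bool:
    """Selection of the text: TPC+ITS signal, constrainable,
    constrained-fit chi2 < 20, Nfit/Nmax > 0.6, tDCA < 0.05 cm (500 um)."""
    return (t.has_tpc and t.has_its and t.constrainable
            and t.chi2 < 20.0
            and t.nfit_over_nmax > 0.6
            and t.t_dca_cm < 0.05)

tracks = [
    Track(True, True, True, 12.0, 0.85, 0.010),   # good primary candidate
    Track(True, True, True, 35.0, 0.90, 0.012),   # fails the chi2 cut
    Track(True, True, True, 10.0, 0.40, 0.008),   # split-track candidate
    Track(True, True, True,  8.0, 0.75, 0.200),   # likely secondary (large DCA)
]
selected = [t for t in tracks if passes_cuts(t)]
print(len(selected))  # → 1
```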
Transverse DCA
The excellent resolution of the ITS (see sec.2.1.1) allows an extrapolation of the track to the main vertex with a precision of the order of 100 µm in the x − y plane, depending on the momentum of the track and on the number of reconstructed clusters, and somewhat worse in the z direction.⁷
The extrapolated distance between the fitted track and the event’s main vertex is called Distance of Closest Approach (DCA). A Gaussian fit of the DCA distribution of primaries in the transverse plane gives σtDCA = 160 µm (see fig.5.9), while a fit in the z direction gives σzDCA = 430 µm. Due to their intrinsically different precision, the two components are usually considered separately, and the much better resolution in the transverse plane makes the tDCA a good parameter for selecting primary particles.
Figure 5.9(a) shows the transverse DCA distribution for all constrainable⁸ tracks and for constrainable primaries, together with a half-Gaussian fit of the latter (with fixed peak position at 0). The integrated efficiency and purity as a function of the tDCA cut are shown in fig.5.9(b): the purity of the selection is not very sensitive to the applied cut, while the efficiency rapidly drops for a tDCA cut smaller than a few hundred µm.
In the present analysis, a cut at 500µm (∼ 3σtDCA ) has been applied. Together with the other cuts, this condition results in an integrated purity of primaries higher than 95%. The detailed pT dependence of the purity for both the GeVSim and the Hijing sample is shown in fig.5.3(b).
The systematic uncertainty connected to this cut is calculated by assuming an imprecision of ±100 µm on the reconstructed tDCA, as if the tDCA distribution obtained from the simulation did not correctly reproduce the one measured in the
⁷ See sec.2.3 and the ALICE PPR [25] (section 5.1.6.3).
⁸ The main vertex is included in the fit.
Figure 5.9. (a) dN/dtDCA of all constrainable AliESDtracks (with TPC + ITS signal, χ2 < 20 and Nfit/Nmax > 0.6) and of constrainable primaries. The full histogram represents the differential distribution, while the upper lines represent the integrated number of tracks for any given cut on the transverse DCA (the number of simulated primaries is shown as well). The experimental resolution on the measured tDCA is obtained through a Gaussian fit of the transverse DCA distribution of reconstructed primary particles (σtDCA ≃ 160 µm). (b) Efficiency and purity of primaries with respect to the transverse DCA cut.
real experiment. The reconstruction efficiency has been calculated separately for two choices of the tDCA cut (tDCA < 500 ± 100 µm) and the difference is taken as an estimate of the systematic error (see sec.5.2.2).
Low pT cut and extrapolation
The magnetic field in the ALICE central barrel introduces a low-pT cutoff in the detector acceptance. Low-pT particles (pT ≲ 100 MeV/c for pions) are curved enough to barely reach the TPC, and therefore the track reconstruction efficiency becomes almost zero in the pT region below 100 MeV/c (see sec.2.1).
The strategy applied in the present analysis is to limit the measurements to the pT range above 100 MeV/c and, after having measured the particle yield and applied the efficiency corrections, extrapolate the measurements down to pT = 0.
The content of the first bin of the dN/dpT histogram is estimated as a fraction of the total integral of the reconstructed spectrum: assuming the cumulative dN/dpT spectrum of charged ‘stable’ hadrons is known, it is possible to calculate the ratio between the number of particles produced with pT < 100 MeV/c (N1) and the number of particles produced between pT = 0.1 and 10 GeV/c (Na):
\[
n_{\mathrm{low}} = \frac{N_1}{N_a}
= \frac{\int_0^{100\,\mathrm{MeV}/c} \frac{dN}{dp_T}\, dp_T}
       {\int_{100\,\mathrm{MeV}/c}^{10\,\mathrm{GeV}/c} \frac{dN}{dp_T}\, dp_T}\,.
\tag{5.6}
\]
In the present analysis, the ratio nlow has been calculated exactly from the (known) input spectra; the values are 0.0406 for GeVSim (with input spectra given by eq.5.10)
and 0.0436 for Hijing. The difference between the two values is used to estimate the systematic error of the method due to the ‘a priori’ unknown shape of the observable dN/dpT spectrum (see sec.5.2.2).
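A numerical sketch of eq.5.6, assuming a single Levy-type pion spectrum (eq.5.10 without the mass-modified exponent) and the GeVSim slope parameters; with only one species the resulting n_low is not expected to reproduce the quoted 0.0406:

```python
import math

def dndpt(pt, m=0.13957, t0=0.125, n=5.0):
    """Levy-type dN/dpT (cf. eq. 5.10 without the mass-modified exponent)."""
    mt = math.sqrt(m * m + pt * pt)
    return pt / (1.0 + (mt - m) / (n * t0)) ** n

def integral(f, a, b, steps=20000):
    """Plain trapezoidal rule; accurate enough for a smooth spectrum."""
    h = (b - a) / steps
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps)))

n1 = integral(dndpt, 0.0, 0.1)    # yield below 100 MeV/c
na = integral(dndpt, 0.1, 10.0)   # yield in the measured range
n_low = n1 / na

# Statistical error of the extrapolated bin (eq. 5.7): for Na observed
# counts, sigma_N1 = n_low * sqrt(Na) = N1/sqrt(Na) < sqrt(N1).
Na = 1_000_000
sigma_n1 = n_low * math.sqrt(Na)
print(round(n_low, 4), round(sigma_n1, 1))
```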
For a small uncertainty on the ratio nlow, the statistical error associated with this procedure is comparable to the error of a counting experiment (σN = √N) and, therefore, to the statistical error of any other pT bin:

\[
\sigma_{N_1} = n_{\mathrm{low}} \times \sigma_{N_a}
\simeq \frac{N_1}{N_a} \times \sqrt{N_a}
= \frac{N_1}{\sqrt{N_a}}
< \frac{N_1}{\sqrt{N_1}} = \sqrt{N_1}\,.
\tag{5.7}
\]
In a real experiment, the ratio nlow should be obtained from an accurate fit of the reconstructed spectra (the fit can be done just once, assuming the centrality class dependence of the spectra is negligible) and the calculated nlow could be used as a reconstruction parameter for recovering the dN/dpT spectrum from the observed data corrected by the efficiency.
The extrapolation of the dN/dpT spectrum could be done using a Levy distribution, as suggested by some recent studies at RHIC [41]:

\[
\frac{1}{p_T}\frac{dN}{dp_T} = A \cdot \frac{1}{\left(1 + p_T/(n \cdot T)\right)^{n}}\,.
\tag{5.8}
\]
Or, more precisely, using a weighted sum of three Levy distributions (for π, K and p) with mT in place of pT , since the observed dN/dpT spectrum is actually the sum of the spectra of all charged stable particles.
A slightly modified version of eq.5.8, incorporating the particle mass dependence, has been used to generate all pT spectra of the GeVSim events (see sec.5.3).
The extrapolations of the particle spectrum and the elliptic flow to low pT (see sec.5.3.3) are an essential step in calculating the integrated v2. The results of the extrapolations, for both the GeVSim and the Hijing samples, are shown in sec.5.3.4 and 5.4.3 respectively.
5.2.1 Final corrections

One of the goals of the present analysis is to measure the integrated elliptic flow 〈v2〉 at midrapidity. This is achieved by taking the average cos[2(φi − Ψ)] in the kinematic range covered by the detector. The track reconstruction efficiency is not constant as a function of η, pT and φ; therefore some corrections are needed to provide a measurement which is not biased by the detector itself.
Efficiency corrections are applied under the assumption that the total momentum spectra of reconstructed tracks d³N/dp⃗ can be factorized into the three familiar components η, pT and φ. This assumption is not entirely true: e.g. straight (high pT) tracks can easily escape through a crack between two segments in the TPC without being detected at all, while more curved (lower pT) tracks could spiral back into the sensitive volume of the TPC and release enough hits to be reconstructed. However, a full 3D study of the efficiency would require much higher statistics than those available.
Figure 5.10. (a) Simulated and reconstructed dN/dpT spectra of the full set of simulations (Hijing + GeVSim). (b) Final efficiency and purity correction factors as a function of pT , calculated over the full set of simulations.
• Since the pseudorapidity dependence of v2 is assumed to be flat, η corrections are not taken into account; only an acceptance cut is applied (see the previous sections).
• The geometrical arrangements and the magnetic field in the central barrel introduce a non-flat pT dependence of the efficiency. This is particularly important in the low-pT region (pT < 1 GeV/c) where, due to the exponential shape of the dN/dpT distribution, most of the particles are produced, and where the differential shape of v2(pT) is definitely not flat (see chap.1). Therefore, the pT dependence of the efficiency needs to be taken into account when calculating the integrated v2 (i.e. the integral of v2(pT) convoluted with the corrected dN/dpT spectra, see sec.3.2.5).
• The azimuthal segmentation of the active elements in the main tracking device (the TPC, see sec.2.1.2) causes a periodic drop in the azimuthal dependence of the efficiency at φ = n × 2π/18 (see fig.3.2(a)). As described in sec.3.2.4, the implementation of the flow analysis code already incorporates a correction of the dN/dφ distribution, i.e. φ weights are used for the determination of the Q⃗ vector.
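The φ-weight correction of the last item can be illustrated with a toy acceptance; the 18-fold dip pattern and its depth are invented, and the weights are simply the inverse of the normalized dN/dφ histogram:

```python
import numpy as np

# Toy azimuthal acceptance with periodic dips at the 18 TPC sector
# boundaries (the dip shape and depth are invented).
nbins = 72
phi_edges = np.linspace(0.0, 2.0 * np.pi, nbins + 1)
phi_centers = 0.5 * (phi_edges[:-1] + phi_edges[1:])
dn_dphi = 1000.0 * (1.0 - 0.3 * np.cos(9.0 * phi_centers) ** 8)

# phi weights: inverse of the measured dN/dphi, normalized to its mean,
# so that depleted sectors contribute more per track in the Q vector.
weights = dn_dphi.mean() / dn_dphi

# Applying the weights flattens the azimuthal distribution.
corrected = weights * dn_dphi
print(round(float(corrected.std() / corrected.mean()), 6))
```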
The efficiency corrections as a function of pT are calculated by means of the full set of simulations produced for the present analysis (Hijing and GeVSim with Thermus), using the ‘optimal’ set of cuts discussed above. This is done to incorporate some systematic effects due to the ‘a priori’ unknown particle composition as discussed in sec.5.1.2: the difference between the two sets is added to the systematic error (see sec.5.2.2).
The pT dependences of the reconstruction efficiency (for the two cases, Hijing and GeVSim) are shown in fig.5.3. The combined result (the efficiency correction
factor that is used in the analysis) is shown in fig.5.10(b). The integrated efficiency (under the applied set of cuts) for particles between 0.1 and 10 GeV/c is 〈eff〉 = 67%. The integrated purity is 〈pur〉 = 95.7%.
As described in the previous section, the corrections are applied for pT > 100 MeV/c, while the low pT part of the spectrum (pT < 100 MeV/c) is extrapolated as a fraction of the observed dN/dpT distribution corrected by the efficiency.
Considering also the first pT bin, the total reconstruction efficiency is 〈eff_tot〉 = N′ESD/N′MC = 64.3% (this number is used to scale up the reconstructed multiplicity for the plot of v2 vs dN/dη, see fig.5.24 and 5.29).
5.2.2 Systematic Error

The systematic uncertainty on the efficiency (as a function of pT) is calculated by varying the most sensitive observables pointed out in section 5.1 (i.e. the particle composition and the multiplicity dependence) and the applied cuts (limiting the discussion to the transverse DCA only, see sec.5.2).
Figure 5.11. Systematic error on the efficiency, calculated from the uncertainty on the particle composition (sec.5.1.2), the difference in particle multiplicity (sec.5.1.3), and the applied cut on the tDCA.
Each contribution is obtained from the absolute difference in the calculated efficiency between two (extreme) cases: the systematic uncertainty due to the unknown particle composition is the difference between the two simulated inputs (sec.5.1.2); the uncertainty due to the multiplicity dependence is the difference between the lowest and highest multiplicity events (sec.5.1.3); the uncertainty due to the applied DCA cut is the difference between a DCA cut at 400 µm and 600 µm (±100 µm around the chosen value of 500 µm). The total systematic uncertainty σeff is calculated by adding the three contributions in quadrature (see fig.5.11):
\[
\sigma_{\mathrm{eff}} = \sqrt{\sigma_{\mathrm{mult.}}^2 + \sigma_{\mathrm{p.con.}}^2 + \sigma_{t\mathrm{DCA}}^2}\,.
\tag{5.9}
\]
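The quadrature sum of eq.5.9 with illustrative percentage values (not the measured contributions of fig.5.11):

```python
import math

# Illustrative contributions in % (not the values of fig. 5.11).
sigma_mult, sigma_pcon, sigma_tdca = 2.0, 3.0, 1.5
sigma_eff = math.sqrt(sigma_mult ** 2 + sigma_pcon ** 2 + sigma_tdca ** 2)
print(round(sigma_eff, 2))  # → 3.91
```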
The systematic error on the extrapolation of dN/dpT between 0 and 100 MeV/c is calculated as the difference in the extrapolation parameter nlow between the two sets of simulations (see above): σnlow/nlow ≃ 7%. It is comparable with the systematic error at low pT on the calculated efficiency (for comparison, the value is shown as the first pT bin of fig.5.11).
The systematic uncertainty on the reconstruction efficiency σeff is used to estimate the systematic error on the measured v2 (see sec.5.3.5).
5.3 Genuine flow reconstruction (GeVSim)

This section presents the results of the event plane analysis performed on three different sets of fully reconstructed ALICE events simulated with GeVSim, each set based on a different extrapolation of the centrality dependence of elliptic flow (as presented in sec.1.3).
5.3.1 Simulation details

Events are produced in six centrality classes, with particle multiplicities and widths listed in tab.5.1.
The main vertex position has been fixed at (x, y, z) = (0, 0, 0) (see sec.5.1.4). The magnetic field, measured at the center of the ALICE solenoid, is B = 0.4 T.
Table 5.1. Summary table of the 3 sets of GeVSim simulations. For each centrality class (c.c.), the input values of v2, the particle multiplicity (and width), and the number of produced events are listed.
c.c. | dNch/dη ± σ | 〈v2^LDL〉 | Nevts | 〈v2^hydro〉 | Nevts | 〈v2^hydro2〉 | Nevts
-----|-------------|-----------|-------|-------------|-------|--------------|------
 0   | 1922 ± 300  | 0         | 1k    | 0           | 0     | 0            | 0
 1   | 1619 ± 290  | 5.15      | 1k    | 2.35        | 2.5k  | 3.5          | 1.7k
 2   | 1013 ± 200  | 11.4      | 0.4k  | 5.87        | 1k    | 8.81         | 0.5k
 3   |  617 ± 160  | 12.95     | 0.5k  | 8.04        | 1k    | 12.06        | 0.5k
 4   |  213 ± 90   | 8.8       | 2k    | 9.55        | 2k    | 14.33        | 0.8k
 5   |   42 ± 30   | 2.75      | 16k   | 7.63        | 7.5k  | 11.44        | 6.7k
Particle composition
The particle composition has been calculated using Thermus, a ROOT implementation of the thermal model for particle production [144]. The chemical freeze-out temperature has been set to Tch = 170 MeV and the baryon chemical potential to
Table 5.2. Total and relative particle abundances calculated with Thermus (input of the GeVSim simulations).
p.type (%/tot)   | P.Id.     | m (GeV/c²) | %/tot    | %/‘stable’ h±
-----------------|-----------|------------|----------|--------------
pions (72.2%)    | π+        | 0.13957    | 22.5398  | 39.36
                 | π−        | 0.13957    | 22.5452  | 39.37
                 | π0        | 0.13498    | 27.0965  | 0
kaons (16%)      | K+        | 0.49368    | 4.05139  | 7.07
                 | K−        | 0.49368    | 4.04341  | 7.06
                 | K0S       | 0.49765    | 3.9437   | 0
                 | K0L       | 0.49765    | 3.9437   | 0
nucleons (8.2%)  | p         | 0.93827    | 2.05554  | 3.59
                 | p̄         | 0.93827    | 2.03286  | 3.55
                 | n         | 0.939565   | 2.05242  | 0
                 | n̄         | 0.939565   | 2.02883  | 0
hyperons (3.6%)  | Λ0, Λ̄0    | 1.11568    | 1.937515 | 0
                 | Σ+, Σ̄−    | 1.18937    | 0.528322 | 0
                 | Σ−, Σ̄+    | 1.19745    | 0.515834 | 0
                 | Ξ−, Ξ̄+    | 1.3217     | 0.311066 | 0
                 | Ξ0, Ξ̄0    | 1.3148     | 0.315868 | 0
                 | Ω−, Ω̄+    | 1.6724     | 0.057987 | 0
heavy mesons     | φ0        | 1.01945    | 0.557406 | 0
µB = 10 MeV (the calculation was done for hadrons only). The resulting particle abundances are listed in tab.5.2.
The relative ratios of the three types of charged primary hadrons considered in the analysis are 78.7% π±, 14.1% K±, 7.1% p and p̄. All events of this set of simulations have been produced with the same particle ratios and input spectra, while the total multiplicity and the magnitude of elliptic flow are assigned with respect to the centrality class.
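The quoted ratios follow directly from the %/tot column of tab.5.2, restricted to the charged ‘stable’ hadrons:

```python
# Relative ratios of the 'stable' charged hadrons used in the analysis,
# recomputed from the Thermus %/tot abundances of tab. 5.2.
abund = {
    "pi+": 22.5398, "pi-": 22.5452,
    "K+": 4.05139, "K-": 4.04341,
    "p": 2.05554, "pbar": 2.03286,
}
total = sum(abund.values())
pions = (abund["pi+"] + abund["pi-"]) / total
kaons = (abund["K+"] + abund["K-"]) / total
nucl = (abund["p"] + abund["pbar"]) / total
print(f"{pions:.1%} {kaons:.1%} {nucl:.1%}")  # → 78.7% 14.1% 7.1%
```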
pT and η spectra
In order to reproduce a realistic particle spectrum in the momentum range of interest (0 < pT < 10 GeV/c), the simulated d³N/dp⃗ distribution (expressed in the three familiar components pT, η and φ) has been customized with a user-defined formula, similar to the Levy distribution in mT [41], convoluted with a flat distribution in rapidity y (which leads to a non-flat pseudorapidity distribution) and an azimuthal distribution (with flow) generated by GeVSim.
Figure 5.12. Input spectra, dN/dpT (a) and dN/dη (b), of the GeVSim simulations for the three stable charged hadrons (π±, K±, p and p¯) with the relative ratios given in tab.5.2.
The input dN/dpT spectrum is given by the equation:

\[
\frac{dN}{dp_T} = A \cdot \frac{p_T}{\left(1 + (m_T - m)/(n_M \cdot T_0)\right)^{n_M}}\,,
\tag{5.10}
\]
where mT = √(m² + pT²) is the transverse mass, T0 is the slope parameter (temperature) and nM = n/m^α is the modified slope variation parameter. This phenomenological term introduces a weak dependence of the slope variation on the particle mass, so that the tail of the dNi/dpT distribution becomes particle species dependent (through mi) and a single slope variation parameter n can be used to reproduce the spectra of all particles. The parameters of eq.5.10 are tuned by a fit of the generated spectra of pions, kaons and protons produced by Hijing. The obtained values are T0 = 125 MeV, n = 5 and α = 0.11.
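Eq.5.10 with these parameters can be evaluated directly; the check below only illustrates the qualitative effect of the mass-modified exponent (heavier species get a harder tail), with no claim about the actual GeVSim normalizations:

```python
import math

def dndpt(pt, m, t0=0.125, n=5.0, alpha=0.11):
    """Input spectrum of eq. 5.10: n_M = n / m**alpha softens the slope
    variation for heavier particles (T0 = 125 MeV, n = 5, alpha = 0.11)."""
    mt = math.sqrt(m * m + pt * pt)
    n_m = n / m ** alpha
    return pt / (1.0 + (mt - m) / (n_m * t0)) ** n_m

m_pi, m_p = 0.13957, 0.93827
# Unnormalized proton-to-pion ratio at low and high pT: the ratio grows
# with pT because the proton spectrum has the harder tail.
r_low = dndpt(0.5, m_p) / dndpt(0.5, m_pi)
r_high = dndpt(3.0, m_p) / dndpt(3.0, m_pi)
print(round(r_low, 2), round(r_high, 2))
```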
Fig.5.13 shows a fit of the dN/dpT spectra generated by Hijing (the fit is limited to the interval 0 < pT < 3 GeV/c). The fit works quite well at low pT, but it fails to reproduce the correct slope of the tail of the distribution for pT ≳ 4−5 GeV/c. However, fig.5.3 shows that the reconstruction is not very sensitive to the shape of the input spectra, especially at high pT: the difference in efficiency (purity) between the two inputs (Hijing and GeVSim), due to the effects of bin migration and particle composition, is smaller than a few %.
To save computing time, the range of the simulations has been limited to the central pseudorapidity interval, around the coverage of the ALICE central barrel detectors (−1.3 ≲ η ≲ 1.3).
Elliptic flow v2
Table 5.1 summarizes the simulated values of 〈v2〉 for the three different parametrizations (named LDL, hydro and hydro2).
Figure 5.13. Fit of the dN/dpT spectra generated by Hijing, using eq.5.10 (the fit is limited to the interval 0 < pT < 3 GeV/c). The y axis is in arbitrary units, and the relative height of the spectra is not proportional to the generated particle ratios.
The differential shape of v2(pT) is linearly rising, with saturation at pT = 2 GeV/c. Integrating over the given input spectra of π±, K±, p and p̄ (see also sec.4.2), the saturation values of v2 are given by v2^sat = k_{i→s}〈v2〉, with k_{i→s} = 3.85.
The number of simulated events is chosen so that v2² × dN/dη × Nevts is approximately constant⁹; this should give roughly the same statistical error on the measured v2 in each class. The ‘constant’ value (v2² × dN/dη × Nevts ∼ 3000) is determined by the available resources and CPU time. The resulting statistical error is comparable to the systematic uncertainty (see sec.5.3.5).
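This event-count budgeting can be reproduced from tab.5.1 (hydro column, v2 in %); the required statistics do not always match the produced ones, since some simulations failed:

```python
# (v2 in %, dN/deta) per centrality class, hydro column of tab. 5.1.
classes = {1: (2.35, 1619), 2: (5.87, 1013), 3: (8.04, 617),
           4: (9.55, 213), 5: (7.63, 42)}
target = 3000.0   # the roughly constant v2^2 * dN/deta * N_evts

n_req = {cc: target / ((v2 / 100.0) ** 2 * dndeta)
         for cc, (v2, dndeta) in classes.items()}
most_demanding = max(n_req, key=n_req.get)
print(most_demanding, round(n_req[most_demanding]))
```

As expected, the low-multiplicity peripheral class (c.c.5) is by far the most demanding in number of events.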
5.3.2 Event plane determination and resolution study

The width of the reconstructed event plane with respect to the true one is described by the resolution parameter (see sec.3.2.1).
As an example, fig.5.14 shows the ∆Ψ distribution (modulo π) for three centrality classes (central, mid-central and peripheral events) of the hydro simulations (see tab.5.1). In the upper part of the figure the difference between the reconstructed event plane and the simulated reaction plane is plotted (∆Ψtrue = Ψtrue − Ψ2^obs); in the lower part, the difference between η subevents (∆Ψ^η-sub = Ψ2^A − Ψ2^B, with A and B equal-multiplicity η subevents¹⁰).
The width of the ∆Ψtrue distribution is not very sensitive to the applied cuts, becoming slightly worse if no cuts are applied. However, in the latter case, the observed ∆Ψ2^sub distributions are narrower due to azimuthal correlations between secondary particles (such as decay products), and this can lead to an overestimate of the event plane resolution (see below).
⁹ Due to some failed simulations, this is not always the case (see tab.5.1).
¹⁰ In the absence of nonflow effects the result does not depend on the choice of the subevents.
Figure 5.14. Upper row: ∆Ψtrue distributions for centrality classes 1, 3 and 5 of the hydro simulations (see tab.5.1), using different track selections. Lower row: ∆Ψ2^η−sub distributions. The full histogram represents the ∆Ψ distribution calculated from the KineTree of all generated primary particles.
Using the iterative procedure implemented in the analysis code (see sec.3.2.1), the event plane resolution is extrapolated from the observed ∆Ψsub2 . The iteration is based on eq.3.8, here rewritten for n = 2:
\[
\mathrm{res}_2 = \left\langle \cos\left[2\left(\Psi_2^{\mathrm{obs}} - \Psi_{\mathrm{true}}\right)\right]\right\rangle
= \frac{\sqrt{\pi}}{2\sqrt{2}}\,\chi_2\, e^{-\chi_2^2/4} \left[ I_0(\chi_2^2/4) + I_1(\chi_2^2/4) \right],
\tag{5.11}
\]

where $I_n$ are the modified Bessel functions of order $n$, $\chi_2 = v_2/\sigma$ and $\sigma = \sqrt{\frac{1}{2M}\frac{\langle w^2\rangle}{\langle w\rangle^2}}$. For unitary weights ($w_i = 1$), $\chi_2 = v_2\sqrt{2M}$.
When the extrapolation is done using only primary particles from the KineTree, the result is in perfect agreement with the ‘true’ resolution Ψtrue −Ψobs2 .
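The resolution function of eq.5.11 and its inversion, as used by the iterative procedure, can be sketched with the standard library only; the helper names and the example subevent value res_sub = 0.6 are invented:

```python
import math

def bessel_i(order, x, terms=60):
    """Modified Bessel function I_n(x) from its power series (stdlib only)."""
    return sum((x / 2.0) ** (2 * k + order)
               / (math.factorial(k) * math.factorial(k + order))
               for k in range(terms))

def res2(chi):
    """Event plane resolution of eq. 5.11 as a function of chi_2 = v2/sigma."""
    x = chi * chi / 4.0
    return (math.sqrt(math.pi) / (2.0 * math.sqrt(2.0)) * chi
            * math.exp(-x) * (bessel_i(0, x) + bessel_i(1, x)))

def chi_from_res(target, lo=1e-9, hi=10.0, iters=80):
    """Invert res2 by bisection (res2 is monotonically increasing)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if res2(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Subevent procedure: from the observed correlation of two equal-multiplicity
# subevents, res_sub = sqrt(<cos 2(PsiA - PsiB)>), extract chi_sub and scale
# it by sqrt(2) for the full event (twice the multiplicity).
res_sub = 0.6                         # hypothetical observed value
chi_sub = chi_from_res(res_sub)
res_full = res2(math.sqrt(2.0) * chi_sub)
print(round(res_full, 3))
```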
The optimization of the cuts for the reconstruction of the event plane is done by comparing the observed event plane resolution with the ‘ideal’ one, calculated by feeding the input values of M′ and v′2 into eq.5.11. Note that M′ is the multiplicity used for the calculation of the event plane, i.e. all reconstructible primary particles in the ALICE central barrel (M′ = 1.8 × dN′/dη), and v′2 is the integrated elliptic flow of all primary π±, K±, p and p̄. Figure 5.15 shows the observed event plane resolution (calculated from ∆Ψ2^η−sub) with respect to the centrality class for the three sets of GeVSim events, using different track selections.
The observed resolution becomes lower using more strict cuts because of the
Figure 5.15. Observed event plane resolution with respect to the centrality class (see tab.5.1 for the simulation details), using different track selections. The ‘ideal’ event plane resolutions are shown as well (obtained from the generated distribution of cos(2[Ψtrue − Ψ2^obs])).
Figure 5.16. Observed event plane resolution, calculated using pT weights in the definition of Q⃗, versus centrality class (see tab.5.1 for the simulation details). The plot shows the results using different sets of cuts on the AliESD; the ‘ideal’ values of the event plane resolution are shown as well (from cos(2[Ψtrue − Ψ2^obs])).

reduced statistics (lower M); however, if no condition is applied to exclude secondary tracks, the observed resolution can be higher than the true one. The effect is more visible in peripheral (low multiplicity) events, where the resolution is far from its saturation (see bin 5 of fig.5.15(a) and (b)). The constrainability condition alone (for TPC + ITS tracks) is enough to obtain a resolution very close to the ‘ideal’ values.
A better event plane resolution is achieved by using pT weights in the calculation of the Q⃗ vector (see sec.3.2.3). The use of pT weights, in fact, reduces the contribution of tracks at low pT, where the purity is lower (see sec.5.1). For the same reason, the resolution becomes less sensitive to the applied cuts (see fig.5.16).
From the above study we can conclude that the best resolution is achieved by selecting constrainable TPC + ITS tracks and using pT weights.
Figure 5.17. Effects on the full-event resolution (∆Res = res2^obs/res2^true) of an incorrectly reconstructed multiplicity, for five different combinations of v2 and M (from the hydro parametrization, see tab.5.1).
From the expression of the ~Q vector (eq.3.3) we may argue that the presence of randomly distributed secondaries and double counted tracks does not affect the direction of the reconstructed event plane. The two averages:
\[
\langle \cos(n\varphi_i)\rangle\,, \qquad \langle \sin(n\varphi_i)\rangle
\tag{5.12}
\]

lead to the same central values either by adding randomly distributed φ angles (〈cos(nφrnd)〉 ∼ 0), or by doubling each term (as would happen if every track were reconstructed twice).
A possible problem may arise from the full-event plane resolution. The resolution of subevents, calculated from the difference ΨA2 − ΨB2 (see eq.3.10), is safely under control, because the average direction of Ψ2 does not change in the presence of impurities. But the calculation of the full-event resolution involves the observed multiplicity (eq.3.8), and a larger M would result in an overestimate of the resolution (and, therefore, an underestimate of the measured v2).
For a few values of elliptic flow and multiplicity, fig.5.17 shows how the resolution changes with respect to the fraction of impurities in the sample (the values of v2 and M are taken from the hydro parametrization, see tab.5.1).
The integrated purity¹¹ of the basic selection (constrainability condition for TPC + ITS tracks) is 90% (see fig.5.8). If the purity is weighted with pT (using the same weights as in the calculation of Q⃗), the integrated purity becomes ∼ 93%, leading to a systematic error on the observed resolution smaller than 4% in the worst case (peripheral events).
¹¹ Integral of the purity convoluted with the observed pT spectra.
Figure 5.18. (a) Linear fit of the reconstructed v2 as a function of pT (eq.5.14), with extrapolation to pT = 0. The input value and the KineTree result are also shown. (b) Evaluation of the first pT bin (0 < pT < 100 MeV/c) and the associated error from the efficiency-corrected dN/dpT spectrum, through the factor nlow (see eq.5.6); a Levy fit of the corrected spectrum is also shown (eq.5.8). The full histogram represents the simulated spectrum; the lower set of data is the observed spectrum after the cuts (see sec.5.2) without efficiency correction. These plots are taken from centrality class 2 of the hydro simulations (see tab.5.1).
5.3.3 Differential flow of charged particles

The shape of v2 as a function of pT is an important observable for determining the properties of the Equation of State (see sec.1.3.4). Moreover, the study of elliptic flow with respect to the transverse momentum is needed for the evaluation of the integrated v2.
For pT bins small enough (i.e. on the order of the detector resolution), the reconstruction efficiency can be considered roughly constant within each bin, and therefore the differential shape of v2 versus pT can be measured without taking into account efficiency corrections.
According to the event plane analysis method (see sec.3.2 and [123]), v2(pT) is obtained by dividing the measured v2^obs by the event plane resolution, calculated as the average cos[2(∆Ψ2^sub)] over the centrality class:

\[
v_2(p_T) = \frac{v_2^{\mathrm{obs}}(p_T)}{\langle \mathrm{res}_2 \rangle_{\mathrm{c.c.}}}
= \frac{\left\langle \cos\left[2(\varphi - \Psi_2)\right]\right\rangle_{p_T\,\mathrm{bin}}}
       {\left\langle \cos\left[2\left(\Psi_2^{\mathrm{obs}} - \Psi_{\mathrm{true}}\right)\right]\right\rangle_{\mathrm{c.c.}}}\,.
\tag{5.13}
\]
Due to the high purity of the track selection (see fig.5.10(b)), no other systematic corrections are applied to the measured elliptic flow. The (small) effect of impurities is incorporated into the systematic error (see sec.5.3.5).
A linear fit going through the origin (fig.5.18(a)) is used to extrapolate the measurement of v2(pT) down to pT < 100 MeV/c:

\[
v_2(p_T) = a \times p_T\,.
\tag{5.14}
\]
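The constrained fit of eq.5.14 is a one-parameter weighted least squares through the origin; the data points below are invented, and the slope estimator is the standard a = Σ(pi vi/σi²)/Σ(pi²/σi²):

```python
# Invented v2(pT) points (in %) with uniform errors, fitted to v2 = a * pT.
pts   = [0.2, 0.4, 0.6, 0.8, 1.0, 1.4, 1.8]   # GeV/c
v2obs = [1.1, 2.0, 3.1, 3.9, 5.2, 7.0, 9.1]   # %
sigma = [0.2] * len(pts)

num = sum(p * v / s ** 2 for p, v, s in zip(pts, v2obs, sigma))
den = sum(p * p / s ** 2 for p, s in zip(pts, sigma))
a = num / den                    # slope in % / (GeV/c)

# Extrapolated value in the uncovered region, e.g. at pT = 50 MeV/c.
v2_at_50mev = a * 0.05
print(round(a, 3), round(v2_at_50mev, 3))
```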
The fit interval is pT ∈ (0.1, 2) GeV/c, according to the input of the simulations (see sec.1.3.4).
In a real experiment, where the differential shape of v2(pT) is not linear (see for example fig.1.13 [45, 64]), the extrapolation of v2(pT) to pT = 0 can still be approximated linearly, due to the very small non-covered pT range and to the physical constraint v2(0) = 0. The limited number of particles produced at pT < 100 MeV/c (3.9% of the total¹² in the present parametrization of GeVSim, and 4.2% in Hijing) ensures that an uncertainty of up to 20% on the extrapolated v2 at pT < 100 MeV/c would give an error on the integrated v2 smaller than 1%.
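This claim can be checked with a quick worked example; the average v2 values are invented, while the fraction f = 3.9% and the 20% uncertainty are the ones quoted:

```python
f = 0.039          # fraction of the yield below 100 MeV/c (GeVSim)
v2_high = 5.0      # % : invented average v2 of the measured region
v2_low = 0.25      # % : invented average v2 below 100 MeV/c (linear rise)

v2_int = (1.0 - f) * v2_high + f * v2_low
shift = f * 0.20 * v2_low        # absolute shift for a 20% error on v2_low
rel_error = shift / v2_int       # relative effect on the integrated v2
print(round(rel_error * 100, 3), "%")
```

Because the low-pT bin carries both a small yield fraction and a small v2, the relative effect on the integrated v2 is far below the 1% bound.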
Figure 5.19. Reconstructed v2 as a function of pT for the six centrality classes of the LDL sample, including the most central events with v2^in = 0. The input values and a linear fit of the reconstructed data are also shown.
The three figures (fig.5.19, 5.20 and 5.21) show the reconstructed shape of v2 as a function of pT in the interval 0 < pT < 5 GeV/c, for the three sets of GeVSim simulations. The input values and a linear fit of the data are plotted as well. Only one set of simulations has been produced for the centrality class 0 (most central events, with v2 = 0), and it is shown in fig.5.19 together with the LDL sample.
As expected, the measured v2 is in perfect agreement with the input values as long as the event plane resolution is close to 1 (which is mostly the case). For the most peripheral events (c.c.5), due to the larger fluctuations in multiplicity (∼ 80%, see tab.5.1), the difference between the particle-wise and the event-wise average is not negligible.

12 Charged, ‘stable’ hadrons: π±, K±, p and p̄.

5.3 Genuine flow reconstruction (GeVSim) 99

[Figure 5.20 panels: v2 (%) versus pT (GeV/c), showing v2(pT) from the ESD, a linear fit, and the input v2(pT), for hydro c.c.1 to c.c.5.]

Figure 5.20. Reconstructed v2 as a function of pT for the five centrality classes of the hydro sample. The centrality class 0 plot has not been repeated (see fig.5.19).

[Figure 5.21 panels: v2 (%) versus pT (GeV/c), showing v2(pT) from the ESD, a linear fit, and the input v2(pT), for hydro2 c.c.1 to c.c.5.]

Figure 5.21. Reconstructed v2 as a function of pT for the five centrality classes of the hydro2 sample. Centrality class 0 is omitted (see fig.5.19).

The resolution is calculated from the event-averaged 〈cos(2∆Ψ2^sub)〉, while v2^obs is calculated from the particle-averaged 〈cos(2[Ψ2 − φ])〉. Higher-multiplicity events contribute more particles with a larger v2^obs, but the event plane resolution (calculated as the average over all the events in the centrality class) gives every event the same weight, causing an overcorrection of the observed v2 and a consequent overestimate of the measured elliptic flow. This effect could be corrected by calculating the resolution as a weighted average over the events, with weights proportional to the selected multiplicity. However, the effect is smaller than the statistical error on the measurements, and therefore it has not been taken into account.
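This overcorrection can be reproduced with a toy calculation. A sketch in Python (two hypothetical event classes inside one centrality bin; only the mechanism is from the text, the numbers are invented):

```python
# Two event classes within one centrality bin: low and high multiplicity.
# Higher multiplicity -> better event plane resolution and more particles.
events = [
    {"mult": 50,  "res": 0.60, "v2_obs": 0.030},   # v2_obs = v2_true * res
    {"mult": 150, "res": 0.80, "v2_obs": 0.040},
]
v2_true = 0.05  # both classes share the same true v2 (0.030/0.60 = 0.040/0.80)

# Particle-wise average of the observed v2 (each track enters once)
n_tot = sum(e["mult"] for e in events)
v2_obs_avg = sum(e["mult"] * e["v2_obs"] for e in events) / n_tot

# Event-wise (unweighted) resolution, as used in the analysis
res_event = sum(e["res"] for e in events) / len(events)
# Multiplicity-weighted resolution, the proposed correction
res_weighted = sum(e["mult"] * e["res"] for e in events) / n_tot

print(v2_obs_avg / res_event)     # overestimates v2_true
print(v2_obs_avg / res_weighted)  # recovers v2_true
```

Dividing the particle-wise 〈v2^obs〉 by the unweighted event-average resolution overcorrects; weighting the resolution by multiplicity restores the input value.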
5.3.4 Integrated v2

The integrated elliptic flow is calculated as the average of the reconstructed values of v2 versus pT (see sec.5.3.3), weighted by the number of particles reconstructed at each pT of the dN/dpT distribution:

〈v2〉 = (1/Ntot) Σ_{pT bins} v2(pT) × (dN^obs/dpT) × (1/eff(pT)) . (5.15)
As explained in sec.5.2.1, the measured pT spectrum must be first corrected by the reconstruction efficiency of the selected sample (see also eq.3.18).
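Equation 5.15 is an efficiency-corrected weighted average over pT bins. A minimal sketch in Python (toy binning and invented values; the AliFlow implementation is C++):

```python
# Integrated v2 (eq. 5.15): weighted average of v2(pT) over the
# efficiency-corrected dN/dpT spectrum.
pt_bins = [0.15, 0.45, 0.75, 1.05]        # bin centres in GeV/c (toy binning)
v2 =      [0.01, 0.03, 0.05, 0.07]        # measured v2(pT), invented
dn_obs =  [200., 400., 250., 100.]        # observed counts per bin, invented
eff =     [0.50, 0.65, 0.70, 0.70]        # reconstruction efficiency per bin

# Correct the observed spectrum by the efficiency, bin by bin
dn_corr = [n / e for n, e in zip(dn_obs, eff)]
n_tot = sum(dn_corr)

v2_int = sum(v * n for v, n in zip(v2, dn_corr)) / n_tot
print(f"<v2> = {v2_int:.4f}")
```

Note that the efficiency enters by dividing the observed counts, so poorly reconstructed bins regain their proper weight in the average.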
[Figure 5.22 panels: 〈v2〉 (%) versus centrality class (1–5), showing the ESD 〈v2〉, its systematic error, and the KineTree 〈v2〉, for the LDL, hydro and hydro2 samples.]
Figure 5.22. Integrated v2 with respect to the centrality class for the three sets of GeVSim simulations. The plot shows the reconstructed 〈v2〉 from the AliESDs and the results of the event plane analysis on the KineTrees; statistical and systematic errors are also shown (see sec.5.3.5).
The number of particles with pT < 100 MeV/c is extrapolated as a fraction of the total integral of dN/dpT (see sec.5.2):

N_{pT<100 MeV/c} = n_low × (1/eff) × N^obs , (5.16)

where N^obs/eff is the integral of the efficiency-corrected spectrum observed at pT > 100 MeV/c, and n_low = 0.0406 (see eq.5.6). The result is shown in fig.5.18(b), together with the input spectrum from the KineTree for comparison.
5.3 Genuine flow reconstruction (GeVSim) 101
A linear fit is used to extrapolate the v2 measurement down to pT = 0 (see fig.5.18(a)). The mean value of v2 in the first bin is calculated from the fit function, evaluated at the mean pT of the dN/dpT distribution between 0 and 100 MeV/c (calculated from the fit of the efficiency-corrected pT spectrum, see eq.5.8).
Finally, figure 5.22 shows the integrated v2 with respect to the centrality class for the three sets of GeVSim simulations. As we can see, the simulated values of 〈v2〉 are well reproduced within the statistical (and systematic) error.
5.3.5 Systematic and Statistical Error on the measured v2

The only source of systematic error on the differential shape of v2 as a function of pT is the presence of impurities in the reconstructed spectra (‘impurities’ in each pT bin include both secondary particles and primary particles reconstructed at a different pT).
As shown in fig.5.23(a), at low transverse momentum (pT ≲ 1 GeV/c) secondary particles have a larger v2 than primary particles (in a decay, the mother particle produces two daughters with roughly the same flow as the mother but a lower momentum); therefore the presence of contamination increases the measured value of v2 at lower pT. The opposite effect (contamination with a lower v2) can also occur due to bin-shift, but it is completely negligible with respect to the statistical fluctuations (see fig.5.23(a), at pT > 2 GeV/c).
The systematic error on v2(pT ) is calculated from the difference ∆v2 between the measured v2 of correctly reconstructed primary particles and the measured v2 of the contamination found in the final ESD, weighted by the purity of the selection in each bin. The relative difference ∆v2/v2 is large only in the first few bins (pT < 500 MeV/c).
The overestimate of the measured v2 at low pT due to impurities can be expressed as:

v2^meas(pT) = pur(pT) × v2′(pT) + (1 − pur(pT)) × v2′′(pT) . (5.17)

The relative systematic error on the measured v2 is therefore obtained as:

[v2′(pT) − v2^meas(pT)] / v2′(pT) = (1 − pur) × (∆v2/v2′)(pT) . (5.18)
Weighting this contribution by the purity of the selected track sample, only the leftmost bin (100 < pT < 200 MeV/c), where the magnitude of v2′ is small and the contamination is large, shows a large systematic error, σ_v2/v2 ≃ 6.5% (see fig.5.23(b)). Otherwise the error is of the order of 1−2% over almost all the pT range of interest, becoming negligible for pT > 800 MeV/c.
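Equations 5.17 and 5.18 can be checked numerically. A sketch in Python with hypothetical first-bin values chosen to reproduce the ≃ 6.5% quoted above:

```python
# Systematic error from impurities (eqs. 5.17-5.18):
# v2_meas = pur * v2_prim + (1 - pur) * v2_cont
pur = 0.80        # purity in the first bin (hypothetical)
v2_prim = 0.008   # v2' of correctly reconstructed primaries (hypothetical)
v2_cont = 0.0106  # v2'' of the contamination, larger at low pT (hypothetical)

v2_meas = pur * v2_prim + (1.0 - pur) * v2_cont

# Magnitude of the relative bias, eq. 5.18: (1 - pur) * |dv2| / v2'
dv2 = abs(v2_cont - v2_prim)
rel_sys = (1.0 - pur) * dv2 / v2_prim
print(f"sigma_v2 / v2 = {rel_sys:.3f}")
```

With these numbers the direct computation (v2′ − v2^meas)/v2′ and the purity-weighted formula give the same 6.5% overestimate, as eq. 5.18 requires.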
As a consequence, the integrated v2 is hardly affected by this level of contamination; it is nevertheless possible to calculate an upper limit for the systematic error on 〈v2〉 due to the presence of impurities:

σ^tot_v2 = √( Σ N_pT σ²_v2(pT) / Ntot ) ≲ 2.4% , (5.19)

where the sum has been limited to the interval 0.1 < pT < 1 GeV/c.

[Figure 5.23 panels: (a) v2/v2^sat versus pT (GeV/c) for primaries and secondaries, with ∆v2 and ∆v2/v2; (b) σ_v2 versus pT (GeV/c), with σ_v2/v2, ∆v2/v2 and 1 − purity.]

Figure 5.23. (a) Reconstructed v2/v2^sat as a function of pT for all primary particles and for reconstructed secondaries in the ESD; the difference between the two and the relative contribution to the measured v2 are shown as well. Since the effect is similar for any input value of v2, this plot is produced from the whole set of GeVSim simulations, scaling each centrality class by its saturation v2. (b) Systematic error on the measured v2, calculated as (1 − pur) × ∆v2/v2 (the calculated impurity is also shown).
The systematic error on the integrated v2 is dominated by the uncertainty on the efficiency (as a function of pT ), calculated in sec.5.2.2.
The relative systematic error σ_〈v2〉/〈v2〉 on the integrated flow is calculated as the difference σ_〈v2〉 = |〈v2〉+ − 〈v2〉−|, where:

〈v2〉± = (1/Ntot) Σ_{pT bins} v2(pT) × (dN^obs/dpT) × 1/(eff(pT) ± σ_eff) , (5.20)

divided by the (measured) central value of 〈v2〉. This also includes the systematic uncertainty on the extrapolation of dN/dpT between 0 and 100 MeV/c, where the two extremes are given by N1 ± 7% (see sec.5.2.2).
The result of this procedure is:

σ_〈v2〉/〈v2〉 = (1/〈v2〉) |〈v2〉+ − 〈v2〉−| ≃ 0.126 , (5.21)

which implies a systematic uncertainty on the central 〈v2〉 value of ±6.3%.
5.3 Genuine flow reconstruction (GeVSim) 103
Figure 5.22 shows that, with the number of events available, the systematic error σ_〈v2〉 is large but comparable to the statistical error, calculated as v2^RMS/√N^obs (where v2^RMS = √〈(〈v2〉 − v2)²〉).
However, the statistical error associated with the present measurements is probably underestimated, because the simulations in each centrality class have been produced with a fixed input value of 〈v2〉; the width of the v2 distribution within each centrality class is therefore smaller than in a real experiment.
An upper limit on the statistical error on the integrated flow, with respect to the number of events available, is given by:

σ_stat < max(v2^RMS)/√N_evts , (5.22)

where v2^RMS = √〈(〈v2〉 − v2)²〉 is the spread of v2 within a single event.

The maximum spread in v2 is 200% (from particles maximally correlated with the event plane to particles maximally anti-correlated), and the smallest multiplicity considered in the present analysis is 40 particles per unit rapidity (see tab.5.1 and 5.3), which gives a minimum of about 50 correctly reconstructed primary particles per event in the TPC volume 13. Therefore, the upper limit on v2^RMS is max(v2^RMS) = 4%, giving σ_stat < 0.04/√N_evts.
The upper limit on the relative statistical error on v2 is given by (for 〈v2〉 ≥ 1%):

max(σ_stat/v2) = 4/(v2(%) √N_evts) < 4/√N_evts , (5.23)

which becomes less than 4% as soon as 10.000 events are available, and σ_stat < 0.4% for N_evts = 1.000.000 (one day of ALICE running). We can compare eq.5.23 with the values listed in tab.5.5 (see also the discussion in sec.5.5).
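The bounds of eqs. 5.22 and 5.23 follow directly from the 1/√N scaling. A small sketch in Python using the numbers quoted in the text:

```python
import math

# Upper limit on the absolute statistical error (eq. 5.22):
# sigma_stat < max(v2_RMS) / sqrt(N_evts), with max(v2_RMS) = 4% = 0.04
def sigma_stat_max(n_evts, v2_rms_max=0.04):
    return v2_rms_max / math.sqrt(n_evts)

# For <v2> >= 1%, the relative error is bounded by 4 / sqrt(N_evts) (eq. 5.23)
def rel_stat_max(n_evts):
    return 4.0 / math.sqrt(n_evts)

print(rel_stat_max(10_000))     # below 4% for >= 10.000 events
print(rel_stat_max(1_000_000))  # below 0.4% for one day of running
```
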
5.3.6 Conclusions

Fig.5.24 shows the integrated v2 with respect to the charged multiplicity at mid-pseudorapidity (corrected by the total reconstruction efficiency) for the three sets of GeVSim simulations.

This plot shows how well the event plane analysis at ALICE can distinguish between different models describing the underlying physics of elliptic flow, in relation to the error associated with the measurement (both statistical and systematic errors are shown).

However, in these simulations nonflow effects are absent or very small (no jet correlations are included, only decays). In a real experiment they are expected to give a large contribution in the low multiplicity region (see sec.4.1 and 5.4).
13 This number is approximately given by N′_TPC ∼ 1.8 × dN/dη × eff, with an efficiency of about 64% (see sec.5.2.1).
[Figure 5.24: 〈v2〉 versus dNch/dη, showing the measured and input 〈v2〉 for the LDL, hydro and hydro2 samples, and the systematic error on the measured 〈v2〉.]
Figure 5.24. Reconstructed value of 〈v2〉 as a function of the charged multiplicity at mid pseudorapidity for the three sets of GeVSim simulations, including statistical and systematic error. The input values of v2 and dN/dη are also shown (see tab.5.1).
5.4 Realistic scenario (Hijing + AfterBurner)

In section 4.1, Hijing-simulated events have been studied to quantify the nonflow correlations originating from jets and particle decays, and in section 4.3 we saw the combined effect of genuine elliptic flow and nonflow correlations.
This section will illustrate an analysis done on a realistic set of data, generated with Hijing plus the flow AfterBurner, and fully reconstructed in AliRoot.
5.4.1 Simulations details

Events are produced in twelve centrality classes, each one with a fixed impact parameter. The particle multiplicity, its width, and the magnitude of the integrated v2 are listed in tab.5.3.
The main vertex position is fixed at (x, y, z) = (0, 0, 0). The magnetic field, measured at the center of the solenoid, is ~B = 0.4 T.
Particle composition
The particle composition generated by Hijing is the result of its internal implementation of the hadronization processes [104].

Tab.5.4 shows the relative particle abundances, averaged over all the produced Hijing events. The relative ratios of the three types of charged primary hadrons considered in the analysis are 86.7% π±, 8.7% K±, 4.6% p and p̄.

A detailed study of the centrality dependence of the particle ratios has not been carried out; however, due to the implementation of Hijing as a superposition of many pp events, they are approximately constant within the statistical fluctuations of each event.
Table 5.3. Details of the Hijing + AfterBurner simulations (generated separately in 12 centrality classes).
c.c.   b (fm)   dNch/dη ± RMS   〈v2^hydro〉 %   Nevts
0      7.0      2528 ± 308      0.0             1k
1      7.5      2184 ± 303      1.32            2.2k
2      8.0      1860 ± 301      3.26            2.4k
3      8.6      1524 ± 295      4.75            1.4k
4      9.15     1264 ± 249      5.95            1.1k
5      9.7      992 ± 192       7.39            1k
6      10.6     652 ± 173       8.72            1k
7      11.5     405 ± 139       9.42            1.3k
8      12.2     253 ± 103       9.44            2.5k
9      13.1     121 ± 62        8.67            5.7k
10     13.6     84 ± 54         6.96            12k
11     14.1     43 ± 33         1.0             8k
pT and η spectra
The dN/dpT and dN/dη distributions generated by Hijing are shown in fig.5.25 for the three species of charged ‘stable’ hadrons considered in the analysis (the spectra in fig.5.25 are obtained as the sum over the entire sample). The pseudorapidity limits are −1.3 ≲ η ≲ 1.3.
[Figure 5.25 panels: (a) dN_i/dpT versus pT (GeV/c) and (b) dN_i/dη versus η, for π±, K±, and p, p̄.]
Figure 5.25. Hijing-generated spectra of the three charged ‘stable’ hadrons (π±, K±, p and p̄): dN/dpT (a) and dN/dη (b).
As we can see, the dN/dη distribution (fig.5.25(b)) is almost flat. The dN/dpT distribution (fig.5.25(a)) has a shape which can be described by eq.5.10 (see fig.5.13 for comparison).

Table 5.4. Total and relative particle abundances produced by the Hijing simulations (not all the particle species are listed, therefore %/tot does not add up to 100%).

p.type (%/tot)       P.Id.           %/tot    %/‘stable’ h±
pions (38.5%)        π+              10.91    86.7 (π±)
                     π−              10.92
                     π0              16.6     0
kaons (10.8%)        K+              1.01     8.7 (K±)
                     K−              1.0
                     K0S             1.67     0
                     K0L             0.98
nucleons (1.7%)      p               0.69     4.6 (p, p̄)
                     p̄               0.68
                     n               0.68     0
                     n̄               0.66
hyperons (1.6%)      Λ0 , Λ̄0         0.73     0
                     Σ , Σ̄           0.7      0
                     Ξ , Ξ̄           0.16     0
                     Ω , Ω̄           0.001    0
heavy mesons         ρ , η , ω , φ0  13.2     0
photons              γ               29       0
leptons              e± , µ± , τ±    0.35     0
Elliptic flow v2
Unlike the simulations described in sec.4.1 and 4.3, the present set of fully reconstructed events has been produced in 12 separate centrality classes, each one with a fixed value of v2 (determined by the geometry of the collision at a fixed impact parameter, see sec.1.3) but a non-constant multiplicity, due to the fluctuations involved in the production processes (implemented in Hijing).

Elliptic flow versus centrality has been parametrized according to the hydrodynamic model with the lowest value of cs (see sec.1.3). Tab.5.3 summarizes the simulated values of 〈v2〉 for the 12 centrality classes of the generated events.

The differential shape of v2(pT) increases linearly up to its saturation value at pT^sat = 2 GeV/c (the same as in the other simulations). Integrating over the Hijing-generated spectra of π±, K±, p and p̄, the saturation values of v2 are given by v2^sat = k_{i→s} 〈v2〉, with k_{i→s} = 4.49.
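The factor k_{i→s} is obtained by folding the linear-rise-plus-saturation shape of v2(pT) with the pT spectrum. A sketch in Python with a toy exponential spectrum (the slope parameter T is an assumption; the quoted k_{i→s} = 4.49 comes from the actual Hijing spectra, so this toy yields a different value):

```python
import math

# v2(pT) rises linearly up to pT_sat = 2 GeV/c, then saturates:
# v2(pT) = v2_sat * min(pT / pT_sat, 1)
# Folding this shape with dN/dpT gives <v2>, so v2_sat = k * <v2>.
pt_sat = 2.0
def shape(pt):
    return min(pt / pt_sat, 1.0)

# Toy spectrum dN/dpT ~ pT * exp(-pT/T) with T = 0.4 GeV/c (assumption)
T = 0.4
pts = [0.01 * i for i in range(1, 1001)]       # 0.01 .. 10 GeV/c
w = [pt * math.exp(-pt / T) for pt in pts]     # spectrum weights

mean_shape = sum(s * wi for s, wi in zip(map(shape, pts), w)) / sum(w)
k = 1.0 / mean_shape   # v2_sat / <v2>
print(f"k = {k:.2f}")
```

The softer the spectrum (the more particles sit below pT^sat), the larger k becomes, which is why the Hijing admixture gives a value well above 1.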
5.4.2 Event plane and resolution
Due to the presence of nonflow effects, the ‘observed’ event plane resolution calculated from ∆Ψ2^sub is higher than the ‘true’ one (i.e. 〈cos[2(Ψ2^obs − Ψ^true)]〉). In fig.5.26 the ‘true’ event plane resolution is compared to the ‘observed’ one(s), calculated using two different definitions of subevents (see also sec.4.3). The present results have been obtained from the KineTree of all simulated primary hadrons (π±, K±, p and p̄); fig.4.8(b) in sec.4.3 shows the same plot versus dN/dη.
[Figure 5.26: 〈cos[2(Ψ2 − Ψ^true)]〉 versus centrality class (0–11).]

Figure 5.26. The generated distribution of cos(2[Ψ^true − Ψ2^obs]) is compared to the result of equation 3.8 (for the simulated values of M = 1.8 × dN/dη and v2 respectively) and to the ‘observed’ event plane resolution, calculated from η and random subevents. The histogram shows the KineTree results versus the centrality class (see tab.5.3 for the simulations details).
The observed event plane resolution depends on the choice of the subevents, being closer to the ‘true’ one for η subevents. Therefore the full-event resolution is extrapolated with the iterative procedure described in sec.3.2.1 using η subevents.

The same set of cuts described in sec.5.3.2 has been applied for the reconstruction of the event plane from the AliESD tracks; the choice of constrainable TPC+ITS tracks with no additional cuts gives the best resolution (i.e. the closest to the ‘optimal’ one, calculated from all primary hadrons in the KineTree).

Fig.5.27 shows the observed resolution calculated from the reconstructed tracks using different sets of cuts, with and without pT weights in the calculation of Q⃗2. As expected, the use of pT weights gives a higher resolution (closer to its saturation value), which better reproduces the true one.

The presence of nonflow effects is clearly noticeable when their magnitude is comparable with the magnitude of genuine collective flow, i.e. in the most central and most peripheral events (first and last bins respectively).
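The subevent logic of sec.3.2.1 can be sketched on toy events: the resolution of one subevent is estimated as the square root of the correlation between the two subevent planes. A sketch in Python (Gaussian-smeared event planes stand in for finite-multiplicity fluctuations; this is an illustration, not the AliRoot code):

```python
import math, random

random.seed(1)

# Toy events: the true event plane is at 0; each subevent plane is
# measured with Gaussian smearing (stand-in for finite multiplicity).
def subevent_planes(sigma, n_events):
    return [(random.gauss(0, sigma), random.gauss(0, sigma))
            for _ in range(n_events)]

events = subevent_planes(sigma=0.5, n_events=20000)

# Subevent resolution estimate: sqrt(<cos 2(Psi_a - Psi_b)>)
corr = sum(math.cos(2 * (a - b)) for a, b in events) / len(events)
res_sub = math.sqrt(corr)

# True resolution of one subevent: <cos 2(Psi_a - Psi_true)>, Psi_true = 0
res_true = sum(math.cos(2 * a) for a, _ in events) / len(events)

print(res_sub, res_true)   # the two estimates agree
```

Because the two subevents are smeared independently, 〈cos 2(Ψa − Ψb)〉 factorizes into the product of the two single-subevent resolutions, which is what makes the square-root estimate work.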
[Figure 5.27 panels: event plane resolution 〈cos[2(Ψ2 − Ψ^true)]〉 versus centrality class (0–11), for different ESD cuts, without (a) and with (b) pT weights.]

Figure 5.27. Observed event plane resolution versus centrality class, calculated from ∆Ψ2^ηsub, using different cuts on the reconstructed AliESDs. The two plots show the results using unitary (a) and pT weights (b) in the calculation of Q⃗2. The ‘optimal’ values (i.e. the observed event plane resolution calculated from primary hadrons in the KineTree) are shown as well (see tab.5.3 for the simulations details).
5.4.3 Differential and integrated flow
The reconstruction of the differential shape of v2 is done in the same way as described in sec.5.3.3. Figure 5.28 shows the reconstructed shape of v2 as a function of pT in the interval 0 < pT < 5 GeV/c, for the twelve centrality classes of the Hijing + AfterBurner simulations. The input values are also shown.

The measured v2 is in good agreement with the input values in mid-central collisions. The agreement is less accurate in the extreme cases (most central and most peripheral events), where the magnitude of nonflow effects becomes comparable to the magnitude of the genuine elliptic flow.

The integrated v2 is calculated as in section 5.3.4. Efficiency corrections are applied to the observed dN/dpT spectrum (see sec.5.2.1), and the first bin of the dN/dpT histogram is evaluated as a fraction of the total integral of the corrected spectrum observed at pT > 100 MeV/c (see eq.5.9): N1 = n_low × Na, with n_low = 0.0436. A linear fit of v2(pT) is used to extrapolate the measurement of v2 down to pT = 0.
Fig.5.29 shows the integrated v2 as a function of the charged multiplicity (corrected by the total reconstruction efficiency) for the twelve centrality classes of the Hijing + AfterBurner simulations. The statistical error on the measurements is v2^RMS/√N^obs; the relative systematic error due to the calculated efficiency, the applied cuts and the contamination from secondaries is assumed to have the same magnitude as the one calculated for the GeVSim sample, therefore σ_〈v2〉/〈v2〉 ≃ 6.3% (see sec.5.3.5).
As we can see, the simulated centrality dependence of elliptic flow is well reproduced within the statistical error over a wide range of centrality classes (mid-central events). However, nonflow effects cause the reconstructed v2 to be larger than the input one, especially in very peripheral collisions. This is shown by the difference between the input and the reconstructed 〈v2〉 (see fig.5.29).

[Figure 5.28 panels: v2 (%) versus pT (GeV/c), showing v2(pT) from the ESD, a linear fit, and the input v2(pT), for hijing+AB c.c.0 to c.c.11.]

Figure 5.28. Reconstructed v2 as a function of pT for the 12 centrality classes of the Hijing + AfterBurner simulations. The input and a linear fit of the reconstructed data are also shown.

[Figure 5.29: 〈v2〉 versus dNch/dη, showing the measured Hijing 〈v2〉, the input hydro 〈v2〉, the systematic error, and the nonflow difference ∆v2.]

Figure 5.29. Reconstructed 〈v2〉 versus dN/dη for the Hijing + AfterBurner simulations, including statistical and systematic error (see sec.5.3.5). The input values of v2 are shown as well; the observed magnitude of nonflow effects is deduced from the difference between the input and the reconstructed 〈v2〉.
5.5 Conclusions

Using the results obtained up to here, it is possible to give an overview of the known sources of experimental uncertainty affecting the elliptic flow measurement at ALICE with the event plane method.
To correctly estimate the statistical uncertainty, it must be taken into account that the simulations presented in this chapter were produced in separate centrality classes, each with a fixed value of v2. Therefore the statistical error on v2 (calculated from v2^RMS, see sec.5.3.5) is underestimated.
For a more reliable prediction, the statistical errors are extrapolated from a set of simulations produced with a continuous impact parameter distribution (7 < b < 14.5 fm), where v2 is assigned to each event according to its impact parameter, but the centrality class selection is based on the final particle multiplicity. The KineTrees of the Hijing + AfterBurner simulations (with no detector reconstruction) presented in sec.4.3 have been used for this purpose, where the centrality dependence of 〈v2〉 follows the hydro parametrization (see sec.1.3.2).
Events are divided into five centrality classes, each defined as 20% of the total inelastic cross section (i.e. 20% of the total integral of the Hijing multiplicity distribution, with rescaled impact parameter 7 < b < 14.5 fm). The statistical errors on v2 obtained in this way have been scaled to take into account the efficiency of the detector and the applied cuts (only 64.3% of the primary particles are actually reconstructed, see sec.5.2.1).
Table 5.5. Summary table of the errors associated with the elliptic flow measurement (from a sample of 50.000 minimum-bias Hijing + AfterBurner events, with elliptic flow from the hydro extrapolation). Centrality classes are defined as 20% of the total inelastic cross section.

% c.s.    dNch/dη      〈v2^true〉 %   σstat   σsys   σnonflow
0−20      > 1450       3.67           0.04    0.20   0.12
20−40     670−1450     7.87           0.03    0.43   0.10
40−60     260−670      9.74           0.04    0.53   0.01
60−80     100−260      8.09           0.10    0.44   0.49
80−100    < 100        4.50           0.38    0.25   2.88
0−100     0 ∼ 2500     6.76           0.05    0.37   0.25
Table 5.5 summarizes the three sources of uncertainty that have been considered in the present analysis: statistical error, systematic error, and nonflow contributions. The statistical errors (σstat) listed in tab.5.5 are calculated as:

σ_stat = v2^RMS / √(eff × N^c.c._evts) , (5.24)

where N^c.c._evts is the number of events in each centrality class (i.e. N^c.c._evts ≃ 1/5 × 50.000). Assuming that 10 minimum-bias events per second are reconstructed in the ALICE central barrel detector, this sample corresponds to less than two hours of heavy ion running at the LHC.

We immediately see that, provided a few days of heavy ion running, the statistical error becomes negligible with respect to the systematic one. Nonflow effects represent a large source of uncertainty only at low multiplicity (most peripheral events), while they can be neglected for mid-central events.
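The event-count arithmetic behind these statements can be verified directly (the rate of 10 reconstructed minimum-bias events per second is the stated assumption):

```python
# 50.000 minimum-bias events split over five 20% centrality classes
n_total = 50_000
n_per_class = n_total // 5          # ~10.000 events per class (eq. 5.24)

# At 10 reconstructed events/s, 50.000 events take well under two hours
rate = 10                           # events per second (stated assumption)
hours = n_total / rate / 3600.0
print(n_per_class, f"{hours:.2f} h")
```
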
Chapter 6
Conclusions
The last part of the previous chapter gave an overview of the sources of experimental uncertainties on the measurement of elliptic flow, as developed in this thesis.
Since v2 is calculated as an averaged quantity, its statistical error scales with the square root of the number of events available (σ_〈〉 = σ/√N); therefore, within a few days of heavy ion running, the statistical error will become negligible with respect to the systematic uncertainty and to the magnitude of nonflow effects (see sec.5.5).
The systematic error is large mainly due to the way efficiency corrections are calculated, and only a small contribution is due to the presence of impurities (which cause an overestimate of v2 at low pT ).
• A larger sample of simulated events would allow a detailed study of the efficiency with respect to the particle multiplicity, eliminating in this way a large contribution to the systematic error, due to the multiplicity dependence of the efficiency (see sec.5.1.3).

• A detailed study of the particle ratios (and their pT dependence) in PbPb collisions at LHC energy would remove the uncertainty due to the unknown particle admixture, which also contributes to the systematic error on the efficiency (see sec.5.1.2).

• Only a better characterization of the ITS resolution, and the implementation of a fit-points-dependent cut, could reduce the systematic error due to the applied cut on the transverse DCA (see sec.5.2).
The error on the measured v2 at low pT could be reduced by increasing the purity of the selection (but this would also reduce the statistics, especially at low pT, see sec.5.3.5), or by extending the linear fit of v2(pT) used to extrapolate v2 up to 200−300 MeV/c. However, since the actual shape of v2(pT) is generally not linear, a better fit function should be modeled on available experimental data.
The contributions due to nonflow correlations can be large (assuming they are well described by Hijing), but they mainly affect peripheral events (dN/dη < 200−300). At higher multiplicity, and especially in mid-central events, where the genuine elliptic flow is expected to be large, nonflow contributions become less important and could be neglected in a preliminary flow analysis (see sec.4.3 and 5.4.3).
However, nonflow correlations cannot be completely eliminated by the event plane formalism alone, and therefore other analysis methods should be used as well. For this reason, both the Cumulants and the Lee–Yang zeroes methods are currently being implemented in the AliRoot environment.
Appendix A
Class Description
The following is a list of the C++ classes implemented in the AliFlow package, with a brief description of their purpose. The HTML documentation of the AliFlow package can be automatically generated from the source files (with ROOT THtml) or found on the web [131].
AliFlowEvent
The AliFlowEvent class contains global event variables, such as the event and run number, the trigger signal, and other event observables such as the signals from the ZDC or the FMD. An object array (ROOT TClonesArray class) stores the reconstructed track candidates (AliFlowTrack class, see below), and another array is filled with the reconstructed neutral secondary vertices (AliFlowV0 class).
The AliFlowEvent class inherits from the basic ROOT TObject, so that it can be chained into a TChain or written to disk in a ROOT file. Due to the reduced amount of information that is stored, the size of an AliFlowEvent object is about 1/10 of the original AliESD.
The class implements methods to split the event into random or η subevents and to calculate event-by-event quantities (such as Q⃗n and Ψn of the full event and the subevents) for a given selection of track candidates, with or without pT or η weights. The class also contains the φ weight structure as a static pointer, which has to be filled at the beginning of the analysis loop with the calculated φ weights (see sec.3.2.3). Bayesian ‘a priori’ probabilities for particle identification can also be assigned in this way (see sec.2.3.2).

The AliFlowEvent data structure enables the event plane analysis by default, and the same data structure can be used to implement the Cumulants and the Lee–Yang zeroes analyses (see sec.3.4). Some of the methods to calculate the generating function for the cumulants analysis have been ported from the StFlowEvent code to AliFlowEvent; however, they have not been tested so far.
AliFlowTrack
The AliFlowTrack class summarizes the information of the AliESDtracks stored in the ESD. Data members of this class are the kinematic variables pT, η and φ, for both the constrained and the unconstrained fit of the track (see sec.2.3), together with their χ2 and the distance of closest approach to the main vertex.

Track parameters are limited to the four central detectors (ITS, TPC, TRD and TOF, see sec.2.1); for each of them, the number of fit points, the number of findable clusters and the dE/dx signal (time signature for the TOF) are stored. The Bayesian probability for each particle hypothesis is also stored in a 4 × 5 array (detectors × ALICE p.Id., see sec.2.3.2).
The class also contains a pointer to an array of boolean flags, filled during the loop for the determination of the event plane, which makes it possible to tell whether a track was included in the calculation of Ψn for a given selection (its contribution can then be subtracted from Q⃗n to avoid autocorrelation effects, see sec.3.2.2). A similar structure is repeated for the subevent selection.
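The autocorrelation bookkeeping works as follows: a track that entered Q⃗n must have its own contribution subtracted before its azimuthal angle is correlated with the event plane. A sketch in Python (the AliFlow implementation is C++; all names here are illustrative):

```python
import math

def q_vector(phis, n=2):
    """Flow vector Q_n = (sum cos n*phi, sum sin n*phi)."""
    return (sum(math.cos(n * p) for p in phis),
            sum(math.sin(n * p) for p in phis))

def event_plane(qx, qy, n=2):
    """Event plane angle Psi_n = atan2(Qy, Qx) / n."""
    return math.atan2(qy, qx) / n

phis = [0.1, 0.5, 1.2, 2.0, 2.8, 3.0]   # toy track angles
qx, qy = q_vector(phis)

# When correlating a track's phi with the event plane, remove the
# track's own contribution from Q_2 first, or it correlates with itself.
for phi in phis:
    qx_t = qx - math.cos(2 * phi)
    qy_t = qy - math.sin(2 * phi)
    psi_2 = event_plane(qx_t, qy_t)
    # ... accumulate cos(2 * (phi - psi_2)) into v2_obs here
```

The boolean-flag array described above is what makes this subtraction cheap: the per-track cos and sin terms are only removed when the flag says the track actually entered the Q⃗n sum.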
AliFlowV0
Neutral decay vertices can be stored as AliFlowV0 objects in a separate TClonesArray in the AliFlowEvent. The AliFlowV0 class contains the kinematic variables (pT, η and φ), the V0 position with respect to the primary vertex (decay length), the invariant mass, the most probable particle identification hypothesis and some reconstruction parameters, such as the DCA of the two tracks at the crossing point and the combined χ2. The AliFlowV0 also stores two pointers to the daughter tracks in the AliFlowTracks array.
AliFlowSelection
The AliFlowSelection class is used to select events, tracks for the determination of the event plane, and tracks and V 0s for the correlation analysis.
Data members of this class are integer or floating point numbers, defining the interval of acceptance for selecting:
• events (typically to select a particular centrality class, e.g. multiplicity limits at midrapidity);
• tracks for the determination of the event plane (e.g. constrainable tracks with TPC + ITS signal), more sets of cuts can be tested in a single run (see sec.3.3.1);
• tracks (and V 0s) selection for the correlation analysis (e.g. track candidates with a tDCA < 100 µm), those are the particles that enter the calculation of v2 (eq.3.6).
An AliFlowSelection object must be instantiated at the beginning of the analysis (prior to the φ-weight flattening loop) and filled with the desired set(s) of cuts. Only cuts that are explicitly set in the AliFlowSelection object are applied to the analysis.
Once the cuts are defined, the method AliFlowSelection::Select(TObject*) returns true or false depending on whether the event/track/V0 is selected. If more than one selection is used for the determination of the event plane, the harmonic and selection number must also be specified in the method. The selection of V0 candidates for the correlation analysis also requires an invariant mass cut: the flow coefficients are then calculated within the specified mass range and in two equivalent sidebands, to estimate the flow of the background.
In the present thesis, the event selection is based only on the observed multiplicity (see sec.4.3). The cuts for the determination of the event plane are optimized to achieve the best resolution (see sec.5.3.2), and the cuts applied in the correlation analysis of charged particles are optimized for the selection of primaries (see sec.5.2).
AliFlowAnalyser
The AliFlowAnalyser class performs the event plane analysis over the AliFlowEvents (see fig.3.4), and produces a default set of histograms summarizing the results.
The AliFlowEvent loop has to be implemented externally, providing more flexibility (such as the possibility to perform on-the-fly analysis while looping over the AliESDs). The class is instantiated at the beginning of the event loop, and an AliFlowSelection object must be provided to apply the required cuts (the flattening φ-weight histograms can also be loaded at this step).
The whole execution is driven by three methods:
Init is called just once at the beginning, to initialize the histograms and set the analysis parameters (e.g. the use of pT weights, the choice of the subevents);
Make is called for each AliFlowEvent in the loop; it performs the event selection and the determination of the event plane(s) of the full event and of the subevents, and fills the profile histograms of v2obs and cos(2∆Ψsub);
Finish concludes the analysis by calculating the global resolution with the subevents method (the average is taken over all the selected events), and by correcting the observed flow coefficients. If the efficiency histogram versus pT is provided, it also calculates the integrated flow.
All the analysis histograms are saved in a ROOT file; therefore, both the resolution and the efficiency corrections can also be applied at a later stage.
AliFlowConstants
The namespace AliFlowConstants stores static data members that do not need to change during the analysis, e.g. the number of selections in use for the event plane determination, the number of bins of the various histograms, and the definitions of the centrality classes. Any change to those numbers requires the AliFlow package to be recompiled.
AliFlowMakers
The AliFlowMaker class is the interface between the ALICE event summary data and the AliFlowEvent, i.e. a parser that reads the useful values from AliESD objects and organizes them into the AliFlowEvent structure.
The AliFlowKineMaker class is the interface between the kinematic tree produced by the event generator and the AliFlowEvent. The AliFlowKineMaker is not a fast event simulator (no smearing is applied to the original particles' kinematics, and no detector information is produced); it just creates clean AliFlowEvent objects that can enter the same analysis chain as the reconstructed events. Most of the data members of the AliFlowEvent are left empty or filled with dummy values (100% p.Id. probability, fit χ2 = 1, ...).
This approach has been very useful to test the functionality of the event plane analysis on an ideal input, without going through the full reconstruction chain of AliRoot (which can be very time consuming, see sec.2.2.2): an event generator is used to generate events with the chosen flow and particle multiplicity (transport is switched off in AliRoot), and the produced KineTrees of particles with exact momentum, production vertex and particle Id. are converted into AliFlowEvents and submitted to the analysis chain (this is the approach used in chapter 4).
Some very wide quality cuts are applied at this step:
• only AliESDtracks with a TPC signal are taken from the AliESD;
• only primary particles, or secondaries associated to an AliESDtrack (if 'labels' are available), are imported from the KineTree.

Both 'flow makers' can be used on the fly, creating the AliFlowEvents and directly submitting them to the flow analysis, or the 'maker' phase can be split from the analysis phase by storing the AliFlowEvents to disk.
In the latest developments, both the AliFlowMaker and the AliFlowKineMaker have been embedded in an AliAnalysisTask or an AliSelector (see below).
AliFlowTask (ex AliSelectorFlow)
Later developments of AliRoot have added functionalities to run a complete analysis over simulated events (see fig.3.3). The class AliSelectorRL (inherited from the ROOT TSelector), later replaced by the class AliAnalysisTaskRL (inherited from the ROOT TTask, which also allows distributed analysis), performs the loop over AliESDs and KineTrees in parallel. For each reconstructed event, the AliStack and the simulated KineTree are also opened to give access to the kinematic information of the generated particles, and by using the 'labels' stored in the AliESDtracks, each track candidate can be compared to the simulated particle that produced the hits in the detector from which the track is fitted (see sec.2.3).
If the AliFlowMakers are executed through an AliAnalysisTaskRL, two AliFlowEvents are created (one from the AliESD and one from the KineTree), and the connection between particles and tracks is preserved. The reconstruction efficiency and purity can also be studied at this step (see below).
EffHist, EpHist, CutEff
A few additional classes have been implemented outside the AliFlow package to study the efficiency of the track reconstruction and the effect of the applied cuts:
• EffHist is a class to study the reconstruction efficiency and purity as a function of pT , η, φ and particle type for a given set of cuts;
• EpHist is a class to study the ‘true’ and ‘observed’ event plane resolution as a function of the applied cuts;
• CutEff is a class to study the dependence of efficiency and purity with respect to some specific observables (e.g. the tDCA or the fit χ2).
Without going into the details of their implementation, the general idea is to provide a structure that makes it easy to calculate the number of primary and secondary particles passing a given set of cuts.
Using the Monte Carlo information from the KineTree, the sensitive distributions (such as dN/dpT or dN/dη) of the reconstructed tracks are ordered in a three-dimensional array (four-dimensional if the particle type is also included), whose dimensions are given respectively by the number of applied cuts (n selections can be used, each one sharpening the cuts), the primary condition (track reconstructed from a primary particle, from a secondary particle, from a double-counted primary or from a double-counted secondary), and the momentum resolution (track reconstructed inside or outside the pT bin of the generated particle, see sec.5.1). The same distributions are also generated from the KineTree of primary particles.
Simple operations between the produced histograms lead to the calculation of the track reconstruction efficiency and purity as a function of pT , η and the applied cuts (see sec.5.1 for the definitions). These classes have been extensively used to produce the results shown in sec.5.1, 5.2 and 5.3.2.
However, due to the recent implementation of a more general 'efficiency framework' in AliRoot, they have not been included in the AliFlow package.
Summary
This thesis presents a study of elliptic flow in lead-lead collisions, in the context of ALICE (A Large Ion Collider Experiment), a dedicated heavy ion detector installed at the Large Hadron Collider (LHC) at CERN.
In a non-central collision, the term 'anisotropic flow' refers to the azimuthal anisotropy in the momentum distribution of the emitted particles, which is usually quantified by a Fourier expansion of the d3N/d3p distribution with respect to the direction of the 'reaction plane' (the plane spanned by the impact parameter and the beam axis). Elliptic flow, the second coefficient of this expansion, is denoted v2.
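Explicitly, in the standard notation of the flow literature, the expansion with respect to the reaction plane angle $\Psi_{RP}$ reads:

```latex
\frac{dN}{d(\varphi - \Psi_{RP})} \;\propto\; 1 + \sum_{n=1}^{\infty} 2\, v_n \cos\!\big[ n (\varphi - \Psi_{RP}) \big],
\qquad
v_2 = \big\langle \cos 2(\varphi - \Psi_{RP}) \big\rangle ,
```

so that elliptic flow is simply the second Fourier coefficient, measured as an average over particles and events.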
In the current understanding, v2 is a key observable for studying the thermodynamic properties and the Equation of State of the system created in the early stage of the collision, where the formation of the Quark Gluon Plasma (QGP) is expected: the final momentum anisotropy can be connected to the spatial eccentricity of the initial state by assuming that the constituents are strongly coupled and the system behaves as a relativistic fluid. The magnitude of v2 with respect to the eccentricity of the collision measures the strength of this coupling.
Unfortunately, this thesis was developed in a period when the LHC was not yet operational; the work was therefore devoted to the implementation of experimentally driven predictions of the main observables in Pb-Pb collisions at LHC energy, and to the development of analysis tools to be used in the ALICE environment. The thesis also presents a full example of a flow analysis on simulated heavy ion data, and points out the main sources of experimental uncertainty.
The expected values of elliptic flow and charged multiplicity have been extrapolated, for Pb-Pb collisions at √sNN = 5.5 TeV, in two independent ways (the Low Density Limit approximation and the Relativistic Hydrodynamic model), producing different impact parameter dependences of the elliptic flow. These predictions have been used as input for simulations in the ALICE offline framework, to develop and test a flow analysis code. The analysis algorithm is based on the event plane method, already successfully used for flow studies in other heavy ion experiments at lower energy, such as at the Relativistic Heavy Ion Collider (RHIC) in Brookhaven and in the NA49 experiment at the Super Proton Synchrotron (SPS) at CERN.
One of the biggest experimental uncertainties in measuring flow at the LHC is the magnitude of nonflow effects, i.e. azimuthal correlations between collision products not due to collective flow, and therefore not correlated with the reaction plane. Depending on the analysis method, nonflow effects can introduce a large systematic error in the flow measurement. Nonflow effects have been simulated using Hijing, a heavy ion event generator which implements all known physics effects from a superposition of proton-proton collisions. The comparison between the expected magnitude of elliptic flow and the estimated magnitude of the nonflow contributions defines the applicability of the event plane analysis. The study also shows that nonflow effects are less important when the genuine flow or the multiplicity is large, leading to the conclusion that only peripheral reactions are heavily affected by nonflow. The event plane analysis, however, cannot completely disentangle genuine collective flow from nonflow effects, and therefore other methods should also be used (e.g. the Cumulants or the Lee-Yang zeros methods).
A large systematic error in the calculation of the integrated v2 is related to the uncertainty on the reconstruction efficiency, which depends on the accuracy of the input and on the event selection. In particular, a better parametrization of the particle ratios (possibly modeled on experimental data) should be implemented in the simulations, and multiplicity dependent correction factors should be used.
However, the analysis shows that the input values of the simulations can be reconstructed with an accuracy of a few percent, leading to the conclusion that the ALICE experiment is an optimal environment to measure elliptic flow, and that the event plane analysis provides an easy and straightforward procedure to perform the measurement over a wide range of centralities; it can therefore be perfectly used for 'first-day' physics analysis at ALICE.
Samenvatting
In dit proefschrift wordt elliptische stroming van deeltjes in loodlood botsingen bestudeerd met behulp van ALICE (A Large Ion Collider Experiment), een gea vanceerde zware ionen detector die geïnstalleerd is in de Large Hadron Collider (LHC) op het CERN.
In een nietcentrale botsing refereert de term anisotrope deeltjes stroom naar de anisotrope hoekverdeling in de impulsverdeling van de uitgezonden deeltjes. Over het algemeen wordt deze gekwantificeerd door de Fourierreeks ontwikkel ing van de d3N/d~p verdeling evenwijdig aan het ‘reactievlak’ te nemen (het vlak dat opgespannen wordt door de impactparameter en de bundelrichting). Elliptische stroming, de tweede component van de ontwikkeling, wordt genoteerd als v2. Vol gens de huidige opvattingen is v2 een sleutel observabele voor het bestuderen van de thermodynamische eigenschappen en toestandsvergelijking van een systeem dat zich instelt vlak na de botsing, wanneer het ontstaan van een Quark Gluon Plasma verwacht wordt: de uiteindelijke impuls anisotropie kan verbonden worden aan de ruimtelijke excentriciteit van de beginfase, door aan te nemen dat de relevante vrij heidsgraden sterk gekoppeld zijn en dat het systeem zich gedraagt als een relativis tische vloeistof, en dat de grootte van v2 ten opzichte van de excentriciteit van de botsing de sterkte van de koppeling representeert.
Helaas werd dit proefschrift vervaardigd in de periode dat de LHC nog niet in bedrijf was, als gevolg daarvan is het werk toegespitst op het implementeren van experimenteel gedreven voorspellingen van de belangrijkste observabelen in lood lood botsingen bij LHC energieën en de ontwikkeling van de analyse gereedschap pen die gebruikt moeten worden in de ALICE omgeving. Dit proefschrift bevat ook een volledig voorbeeld van de strominganalyse uit gesimuleerde zware ionen data en laat de belangrijkste bronnen van experimentele onzekerheden zien.
De verwachte waarden van de elliptische stroming en multipliciteit van de ge laden deeltjes zijn doorgerekend voor loodlood botsingen met √sNN = 5.5 TeV op twee verschillende manieren (de lage dichtheidslimiet benadering en het rela tivistische hydrodynamische model), dit resulteert in verschillende afhankelijkhe den van de impactparameter van de elliptische deeltjes stroom. Deze voorspellin gen zijn gebruikt als invoer voor de simulaties in het ALICE offline raamwerk om de analyse code te ontwikkelen en te testen. Het analyse algoritme, gebaseerd op de reactievlakmethode, is al succesvol gebruikt voor stromingstudies bij andere zware ionen experimenten met lagere energieën zoals bij de Relativistic Heavy Ion
134 Samenvatting
Collider (RHIC) in Brookhaven en bij het NA49 experiment in de Super Proton Synchrotron (SPS) op het CERN.
Een van de grootste experimentele onzekerheden in het meten van de deelt jes stroom in de LHC is de grootte van de effecten die niet het resultaat zijn van nietstroming, zoals hoekcorrelaties tussen botsingsproducten door niet collectieve stroming die daardoor niet gecorreleerd zijn met het reactievlak. Afhankelijk van de analyse methode, kunnen de nietstromings effecten een grote systematische fout in de stromingsmeting veroorzaken. Nietstromings effecten zijn gesimuleerd met behulp van Hijing, een zware ionen botsingsgenerator waarin alle bekende fysische effecten van protonproton botsingen geïmplementeerd zijn. De verge lijking tussen de verwachte grootte van de elliptische stroming en de verwachte grootte van nietstromingseffecten definieert de toepasbaarheid van de reactievlak methode. Het onderzoek laat ook zien dat nietstromingseffecten minder belangrijk zijn wanneer werkelijke stroming of de multipliciteit groot zijn, daar uit volgt de dat alleen scherende botsingen zwaar onderhevig zijn aan nietstromingseffecten. De reactievlakmethode kan echter niet gebruikt worden om de nietstromingseffecten en de stromingseffecten compleet te isoleren, daarom zullen er ook andere metho den gebruikt moeten worden (zoals de Cumulante of Lee Yang zero methode).
A large systematic error in the calculation of the integrated v2 is related to the uncertainty in the reconstruction efficiency, which depends on the accuracy of the simulation input and the event selection. In particular, a better parametrization of the particle ratios (possibly based on experimental data) will have to be implemented in the simulations, and multiplicity-dependent correction factors will have to be used.
The analysis shows, however, that the input values of the simulations can be reconstructed within a margin of a few percent. This leads to the conclusion that the ALICE experiment provides an optimal environment for the measurement of elliptic flow, and that the event plane method offers a simple and transparent procedure to perform the measurement over a wide range of centralities; the method is therefore perfectly suited for a first physics analysis with ALICE.
Acknowledgements
When I finished my ‘Laurea’ thesis, back in 2002, I was captivated by the potential of our science to disclose a deeper level of understanding of reality. The enthusiasm of my first experience in high-energy physics made me look for a Ph.D. position, which I luckily found on ALICE. By an amazing coincidence, Alice was also the name of my girlfriend at that time, as well as the girl in my favorite fairy tale. Almost five years passed, my knowledge and technical skills improved, but more than once I had the impression that I was rolling down the rabbit hole and losing myself in the depths that fascinated me so much. What was I looking for, again?
Fortunately, the story has a happy ending. However, I would not have succeeded without the help of a few people, to whom I want to address special thanks. First of all, thanks to Rene: his determination and his patience have kindly kicked me toward the end. To Raimond, for his clever and precious advice and his willingness to talk about anything, physics related or not, at any time. Thanks to Thomas, for having accepted me in the SubAtomic Physics group in Utrecht, and for having trusted me once more by ‘extending’ my seemingly unreachable deadline. To Nick for his essential help in getting started with the ALICE framework, and for keeping alive the tradition of the ‘Physics Colloquium’ (and drinks). Thanks to Andrea (Cky) for the artistic cover of this book, to Wilko for the Dutch ‘samenvatting’, to Cristian for his PhotoShop skills, to Marco for his help with the short summary, which allowed me to have a defense date. And, of course, thanks to Paul, for driving me back from NIKHEF so many times, and for the long discussions about my meaningless plots, sometimes going on for the whole trip.
Thanks also to the rest of the group: Gert-Jan, for assigning me some of the most intriguing tasks I have done during my studies, such as the Van de Graaff experiment and the HiSPARC project; Kees, for his suggestions about not buying such a problematic 64-bit laptop (which I did not listen to); Ton and Arie, for involving me in the assembly of the ITS, and for trusting my movie editing skills; Rene (the young one), for fixing my problematic laptop, twice.
Thanks to the other Ph.D. students who have been more or less contemporary with me: Alexey, Yuting, Sasha and Martijn (who are done), and Federica, Cristian (again), Ermes and Marek (still on their way), for many fruitful discussions and a few social activities. To my first office mates who finished long before me, Garmt and Hernan, for providing a living example of a Ph.D. in its terminal state; at that time I could not understand the pain you were going through. For ‘par condicio’ let’s also thank Phanos, Ingrid, André, Michiel, Mikolaj, Naomi, Ante, and the younger students, Despoina, Minko (gone), Marta, Pédzi, Merijn and Wilko (again). Last but not least, thanks to Astrid for her precious help with my integration in the Dutch environment. I hope I did not forget anyone.
Who else? Thanks to the Grid people at NIKHEF, Jeff, Ronald and David, for allowing my bug-ridden simulations onto their supercomputer. To the Torino group, Luciano, Francesco, Massimo, Chiara, etc., for the useful mail exchanges, and for a few amusing dinners around CERN and Shanghai. To the CERN people, in particular to Federico, Peter, Marian and Youri, for such an indefatigable devotion to ALICE and for providing an essential helpdesk to the entire collaboration.
Thanks to the Don Gauderio fellows from the Latin American Summer School in Malargüe, Teresa, Clementina, Eduardo, Felix, Michele, etc., for two unforgettable weeks in the name of physics and tequila. To Antonello for the first LaTeX layout of this thesis, and to Joana, for having been the cutest office mate ever, and (hopefully) for offering me a job.
Thanks to my family, for always being a moral support, especially in my ‘down’ periods. And finally, thanks to all my friends, and girlfriends, and all the people that have been part of my Dutch life for a variable amount of time. To Stefano, for having been so brave as to follow me out of a perfectly working airplane; I would probably never have started skydiving alone. And to José, for joining us as soon as he had the chance. ‘Blue sky!’, you guys. To Claudietta, for having been my personal movie star. To Eri, for being the most active party guy ever, always HardCore! To Sandra, the Mick O’Connell will never be the same again without you sitting at the entrance. To Vanessa, for having been a bearable flatmate for so long in the peaceful Lunetten. It was nice to share such a ‘gezellig’ experience with all of you from the very beginning, and I hope we will always stay in touch.
Thanks to Alessia for her unconditional happiness and her positive radiation, to Yuria for her tropical sweetness, to Pimwipa for her endurance in filling the cultural gap. Thanks to Anna for her personality and for the best barbecue place in Utrecht. To Laura for the pitstops at (once upon a time) Biltstraat 81.
Thanks to Francesco, Neile and all the ‘Giant Wombats Killed My Grandma’, you were the perfect soundtrack for one of the best periods of my life, and I am so sorry that you never became RockStars, as you used to sing. Thanks also to Gabriel, Arnaud and all the ‘New Acquisition’, not exactly my music but you still kick ass. Keep on playing.
So many people, and places, and things to do. While the rabbit hole is explored down to hell by the most clever people on earth, I wish to conclude here my trip. And home we steer, a merry crew, beneath the setting sun.
Measurement of elliptic flow with ALICE
(with a summary in Dutch)
Thesis for the degree of doctor at Utrecht University, to be defended in public, by authority of the Rector Magnificus, prof. dr. J.C. Stoof, in accordance with the decision of the Board of Doctorates, on Monday 16 June 2008 at 4.15 pm
by
Emanuele Lorenzo Simili, born on 19 May 1976 in Milan, Italy
Promotor: Prof. dr. R. Kamermans
Copromotor: Dr. P.G. Kuijer
ISBN: 9789039348390
Copyright © 2008 by Emanuele Lorenzo Simili. All rights reserved.
Cover: ‘o ring 8’, design by Andrea Lucca (Cky), concept by Emanuele Simili.
This work is part of the research programme of the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
deep down the rabbit hole
Contents
Introduction 1

1 Heavy Ion Collisions & Anisotropic Flow 3
  1.1 A hot, dense, nearly perfect liquid 3
  1.2 Initial Conditions 9
    1.2.1 Eccentricity in Glauber MC 13
  1.3 Medium Properties 15
    1.3.1 Low Density Limit 16
    1.3.2 Relativistic Hydrodynamics 18
    1.3.3 Charged Multiplicity 20
    1.3.4 Differential Flow 22
  1.4 Non-Flow correlations 23

2 Experimental Setup and Analysis Framework 25
  2.1 The ALICE detector at LHC 25
    2.1.1 ITS 28
    2.1.2 TPC 29
    2.1.3 TRD and TOF 32
  2.2 The Off-Line Framework 33
    2.2.1 ROOT 33
    2.2.2 AliRoot and the ALICE Offline Project 34
    2.2.3 Event Generators 36
    2.2.4 AliEn and LCG 38
  2.3 Track Reconstruction in the Central Barrel Detectors 39
    2.3.1 Reconstruction of the primary vertex 41
    2.3.2 Particle identification 42
    2.3.3 Secondary vertices 43

3 Flow Analysis in ALICE 45
  3.1 Aim of the Flow Analysis 45
  3.2 Event Plane Analysis method 47
    3.2.1 Resolution 48
    3.2.2 Autocorrelation 49
    3.2.3 Weights 51
    3.2.4 Flattening Weights and Reconstruction Efficiency 51
    3.2.5 Differential & Integrated Flow 53
  3.3 Implementation 53
    3.3.1 Analysis Strategy 55
    3.3.2 The AliFlow package 57
  3.4 Other Analysis Methods 57
    3.4.1 Applicability 60

4 Feasibility of the Event Plane analysis 63
  4.1 Non-Flow estimate with Hijing 63
  4.2 Flow simulation with GeVSim 69
  4.3 Flow + non-flow 72

5 Simulations & Results 75
  5.1 Efficiency study 75
    5.1.1 Efficiency & Purity 76
    5.1.2 Particle Composition 78
    5.1.3 Multiplicity (in)dependence 80
    5.1.4 Main Vertex 80
  5.2 Cut optimization 82
    5.2.1 Final corrections 87
    5.2.2 Systematic Error 89
  5.3 Genuine flow reconstruction (GeVSim) 90
    5.3.1 Simulations details 90
    5.3.2 Event plane determination and resolution study 93
    5.3.3 Differential flow of charged particles 97
    5.3.4 Integrated v2 100
    5.3.5 Systematic and Statistical Error on the measured v2 101
    5.3.6 Conclusions 103
  5.4 Realistic scenario (Hijing + AfterBurner) 104
    5.4.1 Simulations details 104
    5.4.2 Event plane and resolution 107
    5.4.3 Differential and integrated flow 108
  5.5 Conclusions 110

6 Conclusions 113

A Class Description 115

Summary 131

Samenvatting 133
Introduction
“[...] Thus grew the tale of Wonderland: Thus slowly, one by one, Its quaint events were hammered out - And now the tale is done, And home we steer, a merry crew, Beneath the setting sun [...]”

Lewis Carroll
The work presented in this thesis is dedicated to the physics of high-energy heavy ion collisions, which offer a very rich playground for studying the fundamental properties of strongly interacting matter, such as quarks and gluons, under extreme conditions of energy and density.
From the experimental point of view, quarks are not observed as ‘free’ particles, since the strong force keeps them confined into hadrons. Hadrons are classified as mesons, which are made of quark-antiquark pairs, and baryons, which are made of three quarks. The most common baryons are the proton and the neutron, which are found in the atomic nuclei of all the stable matter in the universe.
Quantum ChromoDynamics (QCD) successfully accounts for fundamental properties observed in high energy experiments and can correctly describe the spectra and the quark configuration of all known hadrons. However, why ‘confinement’ happens in the first place is still an open question of QCD, and the existence of a more crowded configuration of quarks and gluons, which behave almost as free particles inside a confined volume, is not excluded. Relativistic heavy ion collisions offer a glimpse of the creation of such a state, known as the Quark-Gluon Plasma (QGP).
There is experimental evidence for the QGP, mainly based on the collective behavior of the system created in the collision, in particular on its evolution, which seems to be well described by relativistic hydrodynamics. A key observable to study the thermodynamic properties of the QGP is the ‘elliptic flow’, i.e. the azimuthal anisotropy in the momentum distribution of the particles produced in the collision, which can be connected to the Equation of State of the system.
ALICE is a dedicated heavy ion detector for the reconstruction of lead-lead collisions at the Large Hadron Collider, built at CERN between 2002 and 2008. The main purpose of the ALICE experiment is to study the properties of the QGP at collision energies never achieved before.
The present thesis was developed while the LHC was still under construction, and therefore the entire work presented here is based on simulations.
Efforts have been devoted both to the development of parametrizations of the main observables in Pb-Pb collisions at LHC energy, and to the implementation of analysis tools interfaced to the ALICE environment.
The present thesis should be seen as a first example of a physics analysis with ALICE, pointing out the possible sources of uncertainty in this kind of measurement. More accurate ways to perform the flow analysis should and will be developed in the exciting future of the experiment. Fig.1 shows a full 3D simulation of a heavy ion event, as it will be ‘seen’ by the ALICE detector.
Figure 1. 3D display of a simulated collision in ALICE (picture generated with the Event Display in AliRoot).
The thesis is organized as follows. Chapter 1 gives an overview of the theoretical background of heavy ion collisions, focusing on the concept of ‘anisotropic flow’ and on the extrapolation of v2 to LHC energy. Chapter 2 presents the ALICE detector, as well as the software framework used to simulate and analyse the data. In chapter 3 the event plane analysis method is introduced, together with its implementation in the ALICE environment; a brief overview of other analysis methods is also given. Chapter 4 is dedicated to the feasibility of the event plane analysis, considering the presence of non-flow effects as expected at LHC. Chapter 5 shows a complete analysis of simulated data with full detector reconstruction, and studies the possible sources of uncertainty. Finally, chapter 6 draws some conclusions and gives an outlook on how to improve the measurement.
Chapter 1
Heavy Ion Collisions & Anisotropic Flow
Heavy ion collisions are meant to study the physics of nuclear matter under extreme conditions of energy and density, in order to characterize the fundamental properties of strongly interacting fields.
The main subject of this thesis is the measurement of Elliptic Flow, an observable which provides a test of the initial Equation of State of the produced medium in a domain where perturbative QCD does not apply.
The first section of this chapter presents the general understanding of heavy ion collisions, from the experimental observables to their interpretation and the underlying theory (sec.1.1). The following section (sec.1.2) describes the initial conditions of the system created in the collision and their description in terms of a Glauber model. Section 1.3 is dedicated to the medium properties, i.e. what has been observed so far by existing experiments, and to the description of anisotropic flow in terms of a Fourier decomposition. The observed scaling of v2 will also be discussed, and some extrapolations of v2 to LHC energies will be made. The last section (sec.1.4) briefly introduces the concept of non-flow effects, postponing their detailed study to chapters 4 and 5.
1.1 A hot, dense, nearly perfect liquid

The strong interaction between quarks is described by Quantum ChromoDynamics (QCD), in which the color degrees of freedom are introduced.
One of the characteristic features of QCD is that the coupling strength increases with the distance between the interacting quarks. In fact, the interaction becomes so strong that in ordinary matter quarks are permanently confined to colorless hadrons. At large momentum transfer, however, the running coupling constant αs(q2) decreases logarithmically, leading to a weak coupling of quarks and gluons called asymptotic freedom. In this regime perturbative QCD (pQCD) can be applied, leading to (approximate) analytical solutions which have been widely tested in high energy physics experiments.
Over the last years, more and more attention has been devoted to the question of how a strongly interacting medium responds to a dramatic increase of the energy density. Considerable progress has been made by numerically solving the QCD field equations on a space-time lattice (lattice gauge calculations). These calculations, which have been refined in recent years [1–4], show a phase transition from ordinary matter to a new state where the color degrees of freedom are released. This new state is called the Quark Gluon Plasma (QGP) and is expected to occur at a temperature of about 175 MeV and an energy density of 0.7 GeV/fm3 (fig.1.1(a)). Lattice QCD calculations also provide quantitative information about the pressure of the system around its phase transition to this color deconfined state (fig.1.1(b)).
Figure 1.1. (a) Lattice QCD results (for 2 and 3 quark flavors) for the energy density ε/T4 as a function of the temperature around the QGP phase transition, with Tc = (173 ± 15) MeV and εc ≈ 0.7 GeV/fm3 [2]. The rapid increase of the energy density around Tc indicates a rapid increase of the number of degrees of freedom in the system. (b) Pressure p/T4 versus temperature from lattice calculations, showing that the pressure changes smoothly during the phase transition [4].
In the Big Bang theory of cosmology, the universe underwent this phase transition approximately 10 µs after the Big Bang [5]. This phase transition is believed to be now accessible in laboratory experiments: by colliding atomic nuclei at extremely high energy, it is possible to achieve an energy density high enough for the QGP phase transition to take place.
The Relativistic Heavy Ion Collider (RHIC), which has been operational for the last 7 years at the Brookhaven National Laboratory, can collide gold nuclei at up to √sNN = 200 GeV, obtaining an energy density of 10 GeV/fm3 [6]. The energy density will be about one order of magnitude higher at the upcoming Large Hadron Collider (LHC) at CERN, where lead nuclei will collide at √sNN = 5.5 TeV.
Fig.1.2(a) shows schematically the evolution of the system after the collision. The system is created at t = 0, and after a pre-equilibrium stage (the detailed physics behind this stage is still unclear) the system enters the QGP phase and keeps on expanding. When the system has cooled down to the chemical freeze-out, the constituents hadronize into colorless hadrons, but they still interact elastically until the final decoupling and kinetic freeze-out. The system is then dilute enough to continue its expansion as a free stream of particles.
Figure 1.2. (a) Schematic view of the collision in 2-dimensional space-time, from the formation (pre-equilibrium) stage through the Quark Gluon Plasma and the hot and dense phase(s) to freeze-out. (b) Pressure versus energy density for an ideal quark gluon EoS, a hadron gas EoS, and an EoS with a phase transition.
The creation of this hot and dense phase in the RHIC experiments, and the discovery that this state seems to behave as a nearly perfect fluid, was considered the major physics discovery of 2005 by the American Institute of Physics [5]. This discovery was based on the collective behavior of the produced medium, especially as observed in its anisotropic flow. However, it is still heavily disputed [7, 8].
‘Anisotropic Flow’ is a phenomenological term used to describe the collective evolution of the system, observed as an overall pattern which correlates the momenta of the final state particles. This pattern is believed to develop due to the initial asymmetry of the collision 1 and is preserved by the presence of multiple interactions between the system constituents before the kinetic freeze-out, indicating that the system created in a heavy ion collision is definitely different from a superposition of proton-proton collisions 2.
The underlying physics of anisotropic flow is usually described in terms of a pressure gradient, which is intimately related to the Equation of State of the system (see fig.1.2(b), where this relation is given for the EoS of an ideal gas, of a hadron resonance gas, and for an EoS with a phase transition). For this reason the study of flow provides a sensitive tool to characterize the strongly interacting system created in the heavy ion collision.
Condensed matter experiments [10] also show that in a compressed gas of fermions the pressure and energy density reach their maximum in the center of the system and decrease toward the outside, until they reach a common value close to zero at the system boundary. The different size of the system with respect to the azimuthal coordinate causes the pressure gradient to be larger where the distance between the center and the boundary is shorter, and this azimuthal dependence of the pressure gradient drives the evolution of the system toward an anisotropic expansion.

1 When the collision is not central and the interaction volume is shaped as an almond (see fig.1.3).
2 Besides anisotropic flow, there are many other aspects which clearly distinguish AA from pp collisions, from dN/dη to strangeness enhancement to J/Ψ suppression. See reference [9] for a more comprehensive overview.
Fig.1.3 gives a 3D representation of a non-central collision. The reaction plane is defined by the beam direction and the direction of the impact parameter 3 (the z and x axes, respectively). The almond in the middle of the figure is the reaction volume, where the participating nucleons take part in the interaction; the two half spheres represent the spectator nucleons, flying away from each other more or less along the beam direction.
Figure 1.3. Schematic 3D picture of a non-central collision, showing the reaction plane, the almond shape of the interaction volume (participants) and the spectator nucleons flying away in opposite directions. The coordinate system of the event has the x axis oriented in the direction of the impact parameter and the z axis along the beam; the y axis completes the Cartesian system.
The observed flow mainly consists of a combination of two different patterns: a radial expansion (affecting the thermal spectra of the final state particles) and an anisotropic one (affecting the spatial orientation of the particle momenta), the latter arising from the initial spatial asymmetry of the reaction volume.
In non-central collisions the azimuthal distribution of the final state particles turns out to be highly anisotropic; it is therefore possible to determine an event plane Ψ with respect to which the angular distribution of the particle momenta shows a strong cos (n [φ−Ψ]) dependence, called anisotropic flow.
3 The impact parameter is the distance between the centers of the two colliding nuclei, usually called b.
The standard way to characterize anisotropic flow uses a Fourier expansion of the Lorentz-invariant distribution of the outgoing particles [11]:

E \frac{d^3N}{dp^3} = \frac{1}{2\pi} \frac{d^2N}{p_T \, dp_T \, dy} \left( 1 + \sum_{n=1}^{+\infty} 2 v_n(p_T, y) \cos\left[ n(\phi - \Psi_R) \right] \right), \qquad (1.1)

where φ is the azimuthal angle of each particle and ΨR is the reaction plane angle, both measured in the laboratory frame. The first and second coefficients of the expansion, v1 and v2, are called directed and elliptic flow, respectively.
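As a numerical illustration of eq.1.1 (a sketch added here, not part of the thesis analysis), one can sample azimuthal angles from the distribution truncated at the second harmonic and recover v2 as the average ⟨cos 2(φ − ΨR)⟩; all parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event sample: dN/dphi ∝ 1 + 2*v2*cos(2*(phi - Psi_R)), i.e. eq.1.1
# truncated at the second harmonic; all numbers are invented.
v2_true, psi_r, n_particles = 0.05, 0.3, 200_000

phi = rng.uniform(0.0, 2.0 * np.pi, size=4 * n_particles)
pdf = 1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi_r))
accepted = rng.uniform(0.0, 1.0 + 2.0 * v2_true, size=phi.size) < pdf
phi = phi[accepted][:n_particles]

# With the definition of eq.1.1, v2 is the average cos(2*(phi - Psi_R)).
v2_measured = float(np.mean(np.cos(2.0 * (phi - psi_r))))
print(f"v2 input = {v2_true}, recovered = {v2_measured:.4f}")
```

With the factor 2 in eq.1.1, the average of cos 2(φ − ΨR) over the distribution is exactly v2, which is what the flow analysis estimates once ΨR is replaced by a reconstructed event plane.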
Elliptic flow at midrapidity (η ∼ 0) is particularly interesting because it reflects the asymmetry of the region where most of the new particles are produced.
In the current interpretation, flow originates from the rescattering between constituents and the initial spatial eccentricity of the overlap region. The number of interactions (and rescatterings) is larger in more central collisions, while the spatial eccentricity is more pronounced in peripheral collisions. The interplay between these two ingredients dominates the trend of elliptic flow versus centrality.
The huge potential of the measurement of elliptic flow at RHIC (and the fact that this is really ‘first day physics’) led to the developments described in this thesis:
• development of the analysis based on the event plane method, which is a quite straightforward and versatile formalism (see section 3.2),
• implementation of the analysis code within the complex ALICE analysis framework (see section 3.3).
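A minimal sketch of the core of the event plane method (detailed in section 3.2): the second-harmonic event plane is estimated from the Q-vector of the measured azimuthal angles, Ψ2 = (1/2) arctan(Qy/Qx). The toy event generation and all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def event_plane_angle(phi, n=2):
    """Estimate the n-th harmonic event plane from the Q-vector of the angles."""
    qx = np.sum(np.cos(n * phi))
    qy = np.sum(np.sin(n * phi))
    return float(np.arctan2(qy, qx) / n)  # defined in [-pi/n, pi/n]

# Toy event with elliptic flow v2 = 0.08 around a known reaction plane psi_r;
# angles are sampled by accept-reject from 1 + 2*v2*cos(2*(phi - psi_r)).
v2, psi_r, mult = 0.08, 0.5, 20_000
phi = rng.uniform(0.0, 2.0 * np.pi, size=20 * mult)
keep = rng.uniform(0.0, 1.0 + 2.0 * v2, size=phi.size) < \
       1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi_r))
phi = phi[keep][:mult]

psi_est = event_plane_angle(phi, n=2)
print(f"reaction plane: {psi_r}, estimated event plane: {psi_est:.3f}")
```

In the real analysis, autocorrelations are removed and the finite resolution of the estimated plane is corrected for (sec.3.2.1 and 3.2.2); this sketch only shows the basic estimator.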
To show the limits of applicability of this approach, studies have been performed with Monte Carlo simulations for different particle multiplicities and different magnitudes of elliptic flow. To show how different models can be tested by the ALICE experiment, extrapolations of v2 up to LHC energies have been developed.
The experimental effort of many years on the determination of the elliptic flow is summarized in a "universal scaling" of v2 (shown in fig.1.4), i.e. all existing results can be represented on the same axis [12–14]: v2 is divided by the initial eccentricity of the reaction volume ε (in order to distinguish dynamics from purely geometrical effects), and the ratio v2/ε is plotted versus the rapidity density of the overlap region, defined as the charged particle multiplicity at midrapidity dNch/dy divided by the transverse area of the overlap S (see section 1.2).

What is surprising is that all experimental data show an almost linear scaling behavior, suggesting a common driving force in the development of elliptic flow. A very recent work [15] suggests that either the QGP fraction of the system or the system's lifetime might drive this scaling. The plot also shows that only at the highest RHIC energy are the data compatible with ideal relativistic hydrodynamic calculations, which are believed to hold also at even higher energies (e.g. LHC).
The systematic uncertainties on fig.1.4 are currently under intense study, including the uncertainty on the measured v2 that arises from the presence of non-flow correlations [12] (azimuthal correlations not related to the reaction plane, see sec.1.4) and from the effects of flow fluctuations [16].
Figure 1.4. Elliptic flow divided by the eccentricity of the reaction volume, to distinguish dynamics from purely geometrical effects, plotted versus the entropy density of the overlap region, (1/S) dNch/dy [14] (the x and y axes have been enlarged to cover the LHC energy range). The data points range from the AGS (Au+Au, E877) via the SPS (Pb+Pb, NA49) to RHIC (Au+Au and Cu+Cu at √sNN = 62–200 GeV, STAR). The hydrodynamic predictions for two different EoS are shown (for an ideal gas, EoS I, and for a QGP with phase transition, EoS Q, see sec.1.3.2), as well as a linear fit of the data (see sec.1.3.1).
LHC will provide data points up to a much higher energy, where also an increase in the multiplicity is expected, which will enhance the detectability of the elliptic flow. Moreover, at the higher initial energy density, the system will probably stay longer in the partonic stage, where all the elliptic flow will be generated.
Based on the extrapolation described in sec.1.3, the most central Pb-Pb collision at 5.5 ATeV can be represented on the x axis of fig.1.4 at (1/S) dNch/dy ≃ 60–80, depending on the definition of the transverse area (see sec.1.2).
To extrapolate from the existing data to the magnitude of v2 to be expected at LHC energies, two main ingredients are needed:
• the geometry of the initial system (eccentricity ε, transverse area S), calculated with a Glauber model of heavy ion collisions (see section 1.2),
• the EoS of the produced medium, which is needed to transform the initial spatial asymmetry of the system to the momentum anisotropy observed in the final state (see section 1.3).
Two models have been considered to describe the properties of the produced medium and to estimate the final state momentum anisotropy with respect to the initial eccentricity of the reaction volume:
The Microscopic Transport (cascade) Model [17] describes the time evolution of the hadronic/partonic phase by solving a transport equation derived from kinetic theory. In this model, collectivity depends on the interaction cross section between the constituents, and the main assumption is that the mean free path is comparable to the system size (λ ≫ 0). Calculations are done in a perturbative way, giving the first correction to the collisionless limit (free streaming). This approach, also called the Low Density Limit approximation, is described in sec.1.3.1.
The Relativistic Hydrodynamic Model [18] describes the evolution of the system (before the kinetic freeze-out) as the expansion of volume elements of a relativistic fluid; the main assumption is that the mean free path is much smaller than the system size (λ ∼ 0). This concept appeared for the first time in 1953 in a paper by Landau [19]. The system is described in terms of (classical) macroscopic quantities, such as pressure and energy density, local thermal equilibrium is assumed, and an Equation of State is required. The v2 coefficient in this approach turns out to be proportional to the speed of sound in the medium times the spatial eccentricity (see sec.1.3.2).
1.2 Initial Conditions

The usual tool to describe the initial state of a heavy ion collision is a Glauber model [20]. For a given pair of colliding nuclei with mass numbers A and B (usually called target and projectile), the Glauber model provides a way to calculate the number of nucleon-nucleon interactions and the geometry of the overlap region as a function of the impact parameter b (see fig.1.5).
Glauber calculations can be either optical [20], where nucleon positions are approximated by a smooth distribution (the number of participants is proportional to the geometrical overlap of the two nuclear density functions), or Monte Carlo, where the nucleons are point-like centers randomly distributed inside the nucleus and the probability of each interaction is calculated inside the overlap region proportionally to the nucleon-nucleon cross section [21]. The two approaches lead to similar results over a large range of impact parameters, differing only for the most central and most peripheral collisions [21]. For extremely peripheral collisions (b ≥ 2RA) the optical Glauber approach does not provide a good parametrization of the physics of the process, which is then dominated by the random occurrence of single nucleon-nucleon interactions.
However, the study of fluctuations in a Glauber Monte Carlo approach was beyond the purpose of the present thesis (see sec.1.2.1). The extrapolations developed in the following sections are done using the optical Glauber approach.
The Glauber calculation starts with a parametrization of the spatial distribution of the colliding nuclei (defined as the probability to find a nucleon at radius r), which is given by a Woods-Saxon profile:

ρA(r) = ρ0 / (e^((r−RA)/ξ) + 1) , (1.2)
where RA is the radius of the nucleus with mass number A and atomic number Z (the same radius is taken for protons and neutrons), ξ is the nuclear surface diffuseness, and ρ0 is a normalization factor. The distributions of protons and neutrons are normalized separately in such a way that ∫ρp(r) d³r = Z and ∫ρn(r) d³r = A − Z.
In the present calculations the colliding nuclei are 208Pb and the parameters of the nuclear density distribution (eq.1.2) have been taken from the literature (nuclear data [22]): the radius is RA = 6.62 ± 0.02 fm, and the nuclear surface diffuseness is ξ = 0.551 ± 0.01 fm.
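Eq.1.2 with these parameters can be checked numerically; the following is a minimal sketch (the normalization ρ0 is left arbitrary, since only the shape matters here):

```python
import math

R_A = 6.62   # nuclear radius of 208Pb (fm), from the nuclear data quoted above
xi = 0.551   # nuclear surface diffuseness (fm)

def woods_saxon(r, rho0=1.0):
    """Woods-Saxon density profile of eq. 1.2 at radius r (fm)."""
    return rho0 / (math.exp((r - R_A) / xi) + 1.0)

# By construction the density drops to half its central value at r = R_A,
# and approaches rho0 well inside the nucleus.
print(woods_saxon(R_A))   # 0.5
print(round(woods_saxon(0.0), 3))
```

The diffuseness ξ controls how quickly the profile falls off around r = RA: a few ξ beyond the radius the density is already negligible.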
The nuclear thickness function is defined as the optical path through the nucleus along the beam direction (z):

TA(x, y) = ∫_{−∞}^{+∞} ρA(x, y, z) dz . (1.3)
The transverse coordinates for a Glauber calculation are shown in fig.1.5: x is oriented in the direction of the impact parameter b, and y is the direction perpendicular to it.
Figure 1.5. Coordinate system of a noncentral collision, used for the Glauber calculation. The impact parameter b is the distance between the centers of the two nuclei.
In noncentral collisions, the probability of each binary nucleon-nucleon interaction in the transverse plane is given by the product of the thickness functions of the two nuclei (transversally shifted by the impact parameter b) times the total inelastic nucleon-nucleon cross section σNN:
PBC(x, y;b) = TA(x+ b/2, y)TB(x− b/2, y)σNN . (1.4)
The energy dependence of Glauber calculations is determined by the nucleon-nucleon inelastic cross section σNN(√s), which is extrapolated from existing pp and pp̄ data, including the highest energy Tevatron collisions (see the current Review of Particle Physics [23] or the PDG website [24]).
According to the value⁴ used in the ALICE PPR [25], the nucleon-nucleon inelastic cross section for PbPb at a collision energy √sNN = 5.5 TeV has been set to σNN = 60 mb.
However, the main ingredients of the extrapolations given in sec.1.3 are not very sensitive to the chosen value of the cross section (see below).
Figure 1.6. Transverse picture of the density distributions of wounded nucleons NWN and binary collisions NBC (arbitrary scale) in the optical Glauber calculation, for an impact parameter b = 7 fm.
The total number of binary nucleon-nucleon collisions is obtained by integrating over the transverse plane (fig.1.6(b)):

NBC(b) = ∫ TA(x+b/2, y) TB(x−b/2, y) σNN dx dy , (1.5)
where the x axis is oriented along the direction of the impact parameter b and the y axis perpendicular to it.
The number of 'wounded nucleons' is defined as the number of nucleons participating in the production process with at least one collision, and is given by the integral [27] (fig.1.6(a)):

NWN(b) = ∫ { TA(x+b/2, y) [1 − (1 − σNN TB(x−b/2, y)/B)^B] + TB(x−b/2, y) [1 − (1 − σNN TA(x+b/2, y)/A)^A] } dx dy . (1.6)
⁴The value given in the ALICE PPR (p. 1583 of [25]) is σNN = 57 mb. Other references quote a higher cross section at the same collision energy [26]; however, the actual value of the nucleon-nucleon inelastic cross section at the LHC remains an open issue.
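Eq.1.5 can be evaluated numerically with nothing more than the Woods-Saxon profile of eq.1.2. The sketch below (coarse grids, the parameters quoted above, σNN = 60 mb) reproduces the order of magnitude of fig.1.7; NWN of eq.1.6 can be computed from the same thickness function in exactly the same way:

```python
import math

A = 208                  # mass number of Pb
R_A, xi = 6.62, 0.551    # Woods-Saxon parameters (fm)
sigma_nn = 6.0           # inelastic NN cross section: 60 mb = 6.0 fm^2

def trapz(f, a, b, n=300):
    """Simple trapezoidal rule."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def ws(r, rho0):
    return rho0 / (math.exp((r - R_A) / xi) + 1.0)

# Fix rho0 by demanding that the density integrates to A nucleons.
rho0 = A / trapz(lambda r: 4 * math.pi * r * r * ws(r, 1.0), 0.0, 20.0)

def thickness(x, y):
    """Nuclear thickness function T_A(x, y) of eq. 1.3 (fm^-2)."""
    s2 = x * x + y * y
    return trapz(lambda z: ws(math.sqrt(s2 + z * z), rho0), -15.0, 15.0)

def n_bc(b, n=50, lim=10.0):
    """Number of binary collisions, eq. 1.5, on an n x n transverse grid."""
    h = 2 * lim / n
    total = 0.0
    for i in range(n):
        x = -lim + (i + 0.5) * h
        for j in range(n):
            y = -lim + (j + 0.5) * h
            total += thickness(x + b / 2, y) * thickness(x - b / 2, y)
    return total * sigma_nn * h * h

nbc_central = n_bc(0.0)
nbc_peripheral = n_bc(14.0)
print(round(rho0, 3))          # ~0.16 nucleons/fm^3, the familiar nuclear density
print(round(nbc_central))      # most central: O(2000) binary collisions
print(round(nbc_peripheral))   # b = 14 fm > 2*R_A: almost none
```

The resulting central value (roughly two thousand binary collisions) and the vanishing peripheral value are consistent with the b dependence shown in fig.1.7.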
Due to the symmetry of the system (Pb+Pb), in our calculation TA = TB. The two panels of fig.1.6 show the density distributions of wounded nucleons and binary collisions. Depending on the choice of one or the other, the impact parameter dependence of the geometrical quantities (such as spatial eccentricity and transverse area) changes significantly (see fig.1.8).
Figure 1.7 shows the impact parameter dependence of the number of binary collisions (NBC) and the number of participants in the reaction (wounded nucleons NWN). As can be seen, only the number of binary collisions depends strongly on the choice of the nucleon-nucleon cross section, while the number of participants is affected at the level of 1%.
The impact parameter range has been limited to 0 < b < 15 fm (see fig.1.7), where the upper limit is consistent with almost no interactions, 〈NBC〉 ≃ 0.
Figure 1.7. Impact parameter dependence of the number of wounded nucleons (left) and the number of binary collisions (right), calculated for impact parameters b from 0 to 15 fm. The continuous bands represent the 3σ uncertainty on the nuclear radius and width (see above), while the dashed bands represent the uncertainty due to the particular choice of the cross section (the upper and lower lines are produced with σNN = 90 and 40 mb respectively).
The Glauber model also provides the geometry of the overlap region, parametrized by the spatial eccentricity and the transverse area of the overlap (both used in fig.1.4). There are different ways to define these geometrical quantities, depending on the chosen distribution (weighting function) used to compute the averages over the x and y coordinates, e.g. geometric overlap, wounded nucleons or binary collisions (the reference [28] gives a few examples of the procedure). Another option, attempted in more recent developments, makes use of the Color Glass Condensate (CGC) initial conditions, which leads to larger eccentricities and therefore higher flow values [29, 30].
The spatial eccentricity ε is defined in terms of the widths of the distribution projected on the x and y axes (σx² = 〈x²〉 − 〈x〉², σy² = 〈y²〉 − 〈y〉²):

ε ≡ (σy² − σx²) / (σx² + σy²) . (1.7)
The transverse area of the overlap S (also used in fig.1.4) is defined as:

S ≡ π σx σy , (1.8)

where σx is the RMS width along the direction of the impact parameter b, and σy that along the direction perpendicular to it.
Figure 1.8. Impact parameter dependence of eccentricity ǫ (left) and transverse area S of the overlap region (right), for both the density of wounded nucleons (+) and binary collisions (×). The plots are produced with the same Glauber calculation of fig.1.7 (i.e. 30 steps in impact parameter from 0 to 15 fm).
Fig.1.8 shows the impact parameter dependence of the collision geometry (ε and S) obtained from the optical Glauber calculation, for both the distribution of wounded nucleons and that of binary collisions (the integrals over the transverse plane of eq.1.6 and eq.1.5 respectively). Both distributions have a physical meaning, and the choice between them will be discussed in sec.1.3.
The initial energy density in the transverse plane depends only on the thickness functions TA and TB (eq.1.3) and is defined as:
E(x, y) = f(TA(x, y), TB(x, y)) , (1.9)

where f is a function that depends on the initial assumptions (different approaches can be found in the literature [31]).
Early thermalization is assumed, giving all the available energy thermalized in a Lorentz-contracted volume [32].
1.2.1 Eccentricity in Glauber MC

Recent developments, mainly due to the fluctuations observed in v2 [33], suggest⁵ a different definition of eccentricity: the eccentricity of the participants εpart, in which the fluctuations in the positions of the participants (wounded nucleons) are explicitly taken into account [34].
⁵The obvious assumption is that elliptic flow follows the initial eccentricity of the system.
The total number of collisions does not depend just on the geometrical overlap of the two nuclei: each interaction occurs with a probability proportional to σNN. Due to this, the spatial distribution of the nucleons that actually participate in the reaction may have a slightly different shape than the geometrical overlap. The effect is much more pronounced in peripheral collisions, where the overlap region (and its thickness) is small and the randomness of the binary processes dominates.
Therefore, the ellipse formed by the participating nucleons may be rotated with respect to the geometrical overlap, so that the minor axis is not oriented along the impact parameter vector b (see fig.1.9).
Figure 1.9. Schematic view of a collision of two identical nuclei in the transverse plane. The x and y axes are drawn in the standard way, with x oriented in the direction of the impact parameter b. The circles indicate the positions of wounded nucleons (participants). Due to fluctuations, the interaction region is shifted and tilted with respect to the standard (x, y) frame, leading to a spatial distribution which is better approximated by an ellipse along the x′ and y′ axes.
The eccentricity of the participants εpart can be defined with respect to the standard x and y axes (or any other Cartesian system in the transverse plane) as:

εpart ≡ √[(σy² − σx²)² + 4σxy²] / (σx² + σy²) , (1.10)
where σxy = 〈xy〉 − 〈x〉〈y〉 is the covariance. Note that this expression reduces to eq.1.7 if the elliptic distribution of the participants has the same orientation as the geometrical overlap. Eccentricity fluctuations should also be taken into account, especially in very peripheral events [16, 33, 35]. However, including these effects would have required a Glauber Monte Carlo approach and the development of software tools that were not yet available for ALICE. Therefore, in the following, ε always refers to the geometrical eccentricity as defined by eq.1.7.
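Although a Glauber Monte Carlo was not used in this thesis, the difference between eq.1.7 and eq.1.10 is easy to see in a toy simulation (a sketch under simplifying assumptions: point-like nucleons sampled from the Woods-Saxon profile, a hard-disk NN collision condition of area σNN = 60 mb, and no minimum inter-nucleon distance):

```python
import math, random

R_A, xi = 6.62, 0.551                  # Woods-Saxon parameters for Pb (fm)
sigma_nn = 6.0                         # 60 mb in fm^2
d_max = math.sqrt(sigma_nn / math.pi)  # max NN distance for a collision

def sample_nucleus(A=208):
    """Sample A nucleon positions from the Woods-Saxon profile (eq. 1.2)."""
    pts = []
    while len(pts) < A:
        r = random.uniform(0.0, 3 * R_A)
        # accept-reject on r^2 * rho(r); the acceptance stays below 1
        if random.random() < (r / R_A) ** 2 / (math.exp((r - R_A) / xi) + 1):
            costh = random.uniform(-1, 1)
            phi = random.uniform(0, 2 * math.pi)
            sinth = math.sqrt(1 - costh * costh)
            pts.append((r * sinth * math.cos(phi), r * sinth * math.sin(phi)))
    return pts  # only the transverse (x, y) coordinates are kept

def participants(b):
    """Wounded nucleons of two Pb nuclei shifted by +-b/2 along x."""
    na = [(x + b / 2, y) for x, y in sample_nucleus()]
    nb = [(x - b / 2, y) for x, y in sample_nucleus()]
    hit = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < d_max ** 2
    wa = [p for p in na if any(hit(p, q) for q in nb)]
    wb = [q for q in nb if any(hit(p, q) for p in na)]
    return wa + wb

def eccentricities(pts):
    """Standard eccentricity (eq. 1.7) and participant eccentricity (eq. 1.10)."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / n
    syy = sum((p[1] - my) ** 2 for p in pts) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
    eps = (syy - sxx) / (sxx + syy)
    eps_part = math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2) / (sxx + syy)
    return eps, eps_part

random.seed(42)
eps, eps_part = eccentricities(participants(7.0))
print(round(eps, 3), round(eps_part, 3))  # eps_part >= |eps| by construction
```

Since eq.1.10 adds the covariance term in quadrature, εpart ≥ |ε| holds event by event; the difference grows in peripheral events, exactly as argued above.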
1.3 Medium Properties

In the final state particle spectra, both a thermal and an anisotropic collective component can be observed. The first is due to the thermal motion of the particles in the hot dense system created in the collision, the second is a radial boost due to the asymmetry of the system.
A thermalized medium
The thermal motion of the system's constituents is observed in the transverse momentum spectra of the final state particles. The low-pT component of the observed dN/dpT distribution approximately follows a Boltzmann black-body spectrum [8]:

dN/dpT |_{y∼0} ∝ 1 / (e^((pT−µB)/Tapp) ± 1) , (1.11)
where µB is the baryon chemical potential, a parameter which accounts for the energy needed to produce the hadrons.
The radial boost due to the expansion of the system (responsible for the blue shift in the final spectra) can be incorporated into the phenomenological parameter known as the apparent temperature (Tapp [36]), which is expressed in terms of the transverse flow velocity vT [37]:

Tapp = Tf.o. + (1/2) m 〈vT〉² . (1.12)
The freeze-out temperature (Tf.o.) quantifies the thermal motion of the constituents just before the kinetic freeze-out, when the system decouples and all particles propagate as free streaming. The transverse flow contributes to the apparent temperature proportionally to the mass of the particle (heavier particles moving at a fixed velocity carry a higher momentum), therefore a combined fit of identified particle spectra allows the two components to be disentangled [38, 39].
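Eq.1.12 makes this mass ordering explicit. With assumed illustrative values (Tf.o. = 100 MeV and 〈vT〉 = 0.5c are round numbers, not fit results from the thesis), the proton spectrum appears roughly 100 MeV "hotter" than the pion one:

```python
# Illustrative numbers only: T_f.o. = 100 MeV and <v_T> = 0.5 c are assumed
# values, not results from the thesis.
T_fo = 100.0                       # freeze-out temperature (MeV)
v_T = 0.5                          # mean transverse flow velocity (units of c)
m_pion, m_proton = 139.6, 938.3    # masses (MeV/c^2)

def t_app(m):
    """Apparent temperature of eq. 1.12 (MeV)."""
    return T_fo + 0.5 * m * v_T ** 2

# The heavier the particle, the larger the blue shift of the spectrum.
print(round(t_app(m_pion), 1))
print(round(t_app(m_proton), 1))
```

It is exactly this mass dependence that a combined fit of identified spectra exploits to separate Tf.o. from 〈vT〉.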
However, the distribution of eq.1.11 does not properly reproduce the long tail at high pT observed in experimental data, which is dominated by non-thermal processes (hard scattering, recombination [40]). To better reproduce the data, the input distribution used in the GeVSim simulations presented in chapter 5 is a phenomenological functional form inspired by the Levy distribution [41] (see sec.5.3 for the details).
Particle ratios
Assuming that the hadronization process occurs in an equilibrated system composed of non-interacting hadron resonances, hadron yields can be described by a thermal distribution calculated in a grand canonical ensemble [42]. The relative abundances of hadron species are interpreted in terms of statistical hadronization [43, 44]:
ni = (g / 2π²) ∫₀^∞ p² / (e^((Ei(p)−µi)/Tch) ± 1) dp , (1.13)

where Ei = √(p² + mi²), g is the degeneracy factor, and µi is the chemical potential for the creation of a particle of species i = π, K, .... Eq.1.13 gives the yields at the freeze-out time; short-lived particles and resonances need to be taken into account separately to correctly reproduce the particle ratios observed in the final state.
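A numeric illustration of eq.1.13 (a sketch with assumed inputs: Tch = 177 MeV, vanishing chemical potentials, unit degeneracy and Bose statistics for both species, and no resonance feed-down, so the resulting K/π ratio is primordial only):

```python
import math

T_ch = 177.0   # chemical freeze-out temperature (MeV)

def thermal_yield(m, T=T_ch, mu=0.0, g=1, n_steps=4000, p_max=4000.0):
    """Numerically integrate eq. 1.13 for a boson of mass m (MeV)."""
    h = p_max / n_steps
    total = 0.0
    for i in range(1, n_steps):
        p = i * h
        E = math.sqrt(p * p + m * m)
        total += p * p / (math.exp((E - mu) / T) - 1.0)
    return g / (2 * math.pi ** 2) * total * h

m_pi, m_K = 139.6, 493.7
ratio = thermal_yield(m_K) / thermal_yield(m_pi)
print(round(ratio, 2))   # primordial K/pi ratio, well below unity
```

The exponential suppression with mass is the key feature: heavier species are thermally disfavored, and the measured ratios are then corrected by the resonance decays mentioned above.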
The success of the above distribution in describing RHIC data supports the assumption that the system is in local thermal equilibrium when the hadronization process takes place, at the chemical freeze-out. The chemical freeze-out represents the end of the inelastic processes changing the chemical composition of the system, and it occurs at an earlier time than the kinetic freeze-out, which is driven by elastic processes (and thus at a higher temperature, Tch ≃ 177 MeV [45]).
The observed thermal dN/dpT spectra and the success of statistical hadronization in describing the observed particle yields support the assumption that the system is in (local) thermal equilibrium, and if the system is in local equilibrium then it could show hydrodynamic behavior [8].
In noncentral collisions the dN/dφ distribution is azimuthally anisotropic (see also sec.1.1). This phenomenon has been observed in heavy-ion experiments over a wide range of energies; an event plane can be determined on an event-by-event basis, defining the preferred direction of the radiated particles.
The theoretical effort to interpret the data collected at RHIC contributed to a robust description of the system in terms of relativistic hydrodynamics [18]. However, questions are still open, especially with respect to the initial conditions.
Other descriptions are the low density limit approximation (LDL) [17], and numerical implementations of RQMD (Relativistic Quantum Molecular Dynamics) [46–48]. These theoretical models of flow also provide the tools to make extrapolations to LHC energies. Extrapolations based on LDL and relativistic hydrodynamics will be described in the following subsections (see sec.1.3.1 and 1.3.2). The RQMD model has not been considered in the present thesis because it already produces too little flow with respect to RHIC data.
The charged particle multiplicity is calculated from the number of wounded nucleons using a saturation model for particle production (see sec.1.3.3 for the details).
1.3.1 Low Density Limit

Figure 1.4 shows the linear increase of v2/ε with the multiplicity (entropy) density (1/S)(dN/dy). A simple extrapolation to LHC can be done by performing a linear fit on the existing data in a range where they appear to be linear (i.e. from (1/S)(dN/dy) ≳ 5). The fit is justified by the LDL model (see eq.1.14), but is extended much above the 'low density' domain.
The Low Density Limit is a perturbative approximation which describes the first correction to free streaming [36, 49]. It is valid when the particle mean free paths (λi ≃ 1/(σiρ), where σi is the cross section of particle species i = π, K, ... and ρ is the particle density) are larger than the transverse dimensions of the overlap zone.
Under this assumption, particles can escape from the collision zone almost without interacting, and the system's behavior is close to free streaming (the collisionless limit). The first order correction to free streaming is calculated from particle collisions. Particles are initially produced azimuthally symmetric in momentum space but not in coordinate space, and the interactions with comovers produce an azimuthally asymmetric momentum distribution because of the (azimuthal) spatial asymmetry of the source.
The starting point is the initial condition at formation time. Subsequent scatterings between comovers are described by inserting a collision term into the free streaming distribution function. The first order correction is calculated as the deviation from cylindrical symmetry, which directly leads to the magnitude of elliptic flow v2 for the particle species i = π, K, p, ... (see the reference [36]):
v2^i = (ε / 16πσxσy) Σ_j 〈vij σtr^ij〉 (dNj/dy) · v⊥i² / (v⊥i² + 〈v⊥j²〉) , (1.14)
where vi is the velocity of the particle and vj that of the scatterer (what is used is the transverse velocity v⊥ with respect to the reaction plane), and vij is their relative velocity. The averages 〈..〉 are taken over the scatterer momenta pj. Since it is the momentum transferred in the collisions that deforms the momentum distribution, σtr^ij is the momentum transport cross section (i.e. the cross section averaged over energy and scattering angle [36]).
From eq.1.14, elliptic flow is proportional to the eccentricity ε of the overlap region, and it vanishes for an azimuthally symmetric source (central collisions).
The integrated value of v2 in this linear extrapolation is calculated as:

v2 ≃ A_LDL (ε/S)(dN/dy) + B_LDL , (1.15)
with ε and S given by the density of nucleons participating in the reaction in a Glauber calculation (eq.1.7 and 1.8 respectively).
The coefficient A_LDL = 0.00614 ± 0.0001 and the constant B_LDL = 0.051 ± 0.002 are obtained from a linear fit of the highest energy RHIC data⁶ (see fig.1.4). The fit has been restricted to only one set of data points because the scaling is not perfect (see the discussion in sec.1.1).
6The fit only includes data points from AuAu collisions at √sNN = 200 GeV (i.e. Npts = 9, χ2/DoF ≃ 9).
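With these coefficients, eq.1.15 is a one-line evaluation; the ε, S and dN/dy below are assumed round numbers for a mid-central LHC-like event, not the Glauber results of this chapter:

```python
# Fit coefficients from the linear fit quoted above.
A_LDL, B_LDL = 0.00614, 0.051

def v2_ldl(eps, S, dNdy):
    """Integrated v2 from the LDL-motivated linear extrapolation (eq. 1.15)."""
    return A_LDL * eps / S * dNdy + B_LDL

# Illustrative mid-central inputs: eps = 0.3, S = 25 fm^2, dN/dy = 1250.
print(round(v2_ldl(0.3, 25.0, 1250.0), 3))   # ~0.14, cf. fig. 1.12
```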
1.3.2 Relativistic Hydrodynamics

Relativistic hydrodynamics is a classical calculation which describes the system in terms of volume elements of a relativistic fluid [18, 31]. Each 'fluid cell' x is characterized by its energy-momentum tensor:

T^µν = (E(x) + p(x)) u^µ(x) u^ν(x) − p(x) g^µν , (1.16)

where p and E are the pressure and the energy density of the fluid cell, and u^µ = γ(1, vx, vy, vz) is the flow velocity.
The evolution is governed by conservation laws. Local conservation of energy and momentum is expressed by the equations:
∂µT µν = 0 , (ν = 0, 1, 2, 3). (1.17)
Since the fluid is made of quanta, it carries a few conserved charges Ni (such as electric charge, baryon number, strangeness, etc.), with charge densities ni(x) (i = 1, ..., M) corresponding to charge current densities j^µ_i(x) = ni(x) u^µ(x). Charge conservation is expressed by the equations:

∂µ j^µ_i = 0 , (i = 1, ..., M). (1.18)
Hydrodynamics implies the concepts of thermodynamics; in particular, an equation of state (EoS) of the system is needed to close the system of differential equations. The above picture provides a set of 4 + M differential equations, involving 5 + M undetermined fields:
• 3 independent components of the flow velocity u^µ(x),
• the energy density E(x),
• the pressure p(x),
• and the M conserved charge densities.
This set of equations is closed by an equation of state which relates the local thermodynamic quantities p and E (see fig.1.2(b)).
The EoS of strongly interacting particles can, in principle, be calculated by lattice QCD (see fig.1.1). However, those calculations are technically difficult and still lead to large uncertainties [4]. An alternative is to model the system of nuclear matter as a non-interacting gas of hadronic resonances [50].
If the relaxation rate is not fast enough to ensure an almost instantaneous thermalization, the energy-momentum tensor and charge current densities must be generalized to include dissipative effects (e.g. shear viscosity [51]). The goal of this approach is to provide a more accurate description of heavy-ion collisions by taking into account the deviation from an ideal fluid. First order viscous corrections have been derived [52, 53]; however, the actual value of the viscosity in a hot QGP is still controversial. A universal lower bound on the viscosity-to-entropy ratio has been proposed in connection with black-hole physics: η/s > ℏ/4π [54], while a recent study of elliptic flow at RHIC suggests that the magnitude of the viscous correction is significantly higher than this lower bound, η/s ≃ 0.11 to 0.19 ℏ [55].
Figure 1.10. Time evolution of the transverse energy density profile from hydrodynamic calculations [18], for b = 7 fm, at times 0, 2, 4, 6 and 8 fm/c. As the system expands anisotropically, the initial eccentricity vanishes.
The ratio between the elliptic flow and the spatial eccentricity of the overlap parametrizes the speed at which a perturbation is propagated through the system. In the hydrodynamic picture, this ratio is proportional to the square of the velocity of sound in the medium: v2/ε ∝ cs². The velocity of sound is defined as cs² ≡ dP/dE. Different equations of state lead to different relations between the pressure and the energy density (see fig.1.1), and therefore to different values of cs [56].
The spatial anisotropy appears in the early stage of the collision and is self-quenching (see fig.1.10); however, the elliptic flow v2 is conserved during the whole evolution of the system, and therefore carries information on the initial conditions [18].
A simple extrapolation can be made by assuming the ratio v2/ǫ to be constant with respect to the centrality, which is approximately true up to very peripheral collisions [56].
A lower limit on v2 is given by the equation of state of a quark-gluon plasma which undergoes a soft transition to the hadronic phase (EOS Q). The value of cs² has been chosen at the limit of the non-relativistic regime, cs = √0.22 [57]. According to the initial conditions used in [57], the eccentricity is calculated from the entropy density distribution, which is proportional to the density of wounded nucleons. Therefore the values of v2 versus centrality are obtained by scaling cs² by the eccentricity of the wounded nucleon distribution (v2(b) = 0.22 × εWN(b)).
For the upper limit, the equation of state of an ideal gas of massless fermions has been chosen (EOS I), giving P = E/3 (and cs = √(1/3)) [31, 56]. In this case, the eccentricity has been calculated from the density of binary collisions εBC (proportional to the initial energy density [18]), which has on average the same magnitude as εWN(b) but a slightly different centrality dependence (see fig.1.8(a)). Elliptic flow versus centrality is obtained as v2(b) = (1/3) × εBC(b).
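Both hydrodynamic limits thus reduce to a constant times the eccentricity; a trivial sketch (the εWN and εBC inputs are assumed, illustrative values for a mid-central collision, cf. fig.1.8):

```python
# Hydrodynamic bounds on v2 as a pure c_s^2 * eccentricity scaling.
cs2_soft, cs2_ideal = 0.22, 1.0 / 3.0   # EOS Q and EOS I speeds of sound squared

def v2_hydro(cs2, eps):
    """v2 from the constant v2/eps scaling of the hydrodynamic picture."""
    return cs2 * eps

eps_WN, eps_BC = 0.30, 0.32   # assumed eccentricities (illustrative, cf. fig. 1.8)
print(round(v2_hydro(cs2_soft, eps_WN), 3))    # lower limit (EOS Q)
print(round(v2_hydro(cs2_ideal, eps_BC), 3))   # upper limit (EOS I)
```

The full centrality dependence then follows directly from εWN(b) and εBC(b) of the Glauber calculation.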
1.3.3 Charged Multiplicity

In order to estimate the centrality dependence of elliptic flow in PbPb at √sNN = 5.5 TeV, the charged multiplicity at midrapidity (dNch/dy|y∼0) must also be extrapolated.
The final particle multiplicity is calculated from the number of participants in the reaction (wounded nucleons) [58], which dominates the 'soft' component⁷ of the final spectra [59].
The model chosen in the present thesis is a saturation model for particle production in the soft pT region, extrapolated from lepton-proton collisions (see the reference [58]).
Figure 1.11. (a) Number of produced particles per wounded nucleon with respect to the number of wounded nucleons NWN. The LHC prediction (upper band) is calculated from eq.1.19; for comparison, the fit of three different sets of RHIC data is also shown (√sNN = 19.6, 130 and 200 GeV [58]). (b) Charged multiplicity per unit rapidity with respect to the impact parameter at LHC (PbPb at √sNN = 5.5 TeV). Values are calculated by inserting the number of wounded nucleons NWN, obtained from the Glauber calculations (eq.1.6), into eq.1.19.
The main assumption of this approach is the geometric scaling of hadrons produced at small xBj observed in lepton-proton data at HERA. Over a wide range of Bjorken x and Q², the x dependence can be expressed by the saturation momentum Q²sat(x), so that the data are described in terms of a single variable Q²/Q²sat(x). By adding a nuclear dependence in the definition of the saturation momentum, Q²sat,A ∝ A^α Q²sat, the model works well in fitting RHIC and SPS data at different beam energies, and can be easily extrapolated to LHC [58].
The multiplicity of newly produced (charged) particles per participant, with respect to the collision energy √sNN, incorporates the Q²sat dependence in the Golec-Biernat and Wusthoff (GBW) parameter λ [60]:

(2/NWN) dNch^AA/dη |_{η∼0} = N0 (√s)^λ NWN^((1−δ)/(3δ)) , (1.19)

where δ = 0.79 ± 0.02 is the fit parameter, and N0 = 0.47 is the overall normalization [58]. The GBW parameter (for R0² = 1/Q²sat = (x̄/x0)^λ in GeV⁻² and x0 = 3.04 · 10⁻⁴) is λ = 0.288 [60].
⁷The term soft refers to the low-pT part of the spectra (pT ≲ 1 GeV/c), in contrast with the hard component, which refers to hard scattering processes leading to jets and high-pT observables. In heavy-ion collisions, the soft component mainly consists of thermalized particles, where the thermalization is a consequence of multiple scattering in the medium.
Combining eq.1.19 with the number of participants from the above Glauber calculations, it is possible to estimate the impact parameter dependence of the charged multiplicity at midrapidity at LHC (see fig.1.11(b)).
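Numerically, this extrapolation amounts to evaluating the saturation-model parametrization of ref. [58] (a sketch; N0 = 0.47, λ = 0.288 and δ = 0.79 as quoted, while NWN = 400 for central PbPb and NWN = 350 for central AuAu are assumed round numbers):

```python
# Saturation-model multiplicity extrapolation (parametrization of ref. [58]).
N0, lam, delta = 0.47, 0.288, 0.79

def dNch_deta(sqrt_s, n_wn):
    """Charged multiplicity at midrapidity from eq. 1.19 (sqrt_s in GeV)."""
    per_pair = N0 * sqrt_s ** lam * n_wn ** ((1 - delta) / (3 * delta))
    return per_pair * n_wn / 2.0

# Central PbPb at the LHC: sqrt(s_NN) = 5500 GeV, N_WN ~ 400 (assumed).
print(round(dNch_deta(5500.0, 400)))   # ~1900, cf. fig. 1.11(b)
# Sanity check at RHIC energy: sqrt(s_NN) = 200 GeV, N_WN ~ 350 (assumed).
print(round(dNch_deta(200.0, 350)))    # ~650, close to central AuAu data
```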
Figure 1.12. Integrated elliptic flow 〈v2〉 versus charged multiplicity at mid-pseudorapidity (∼ event centrality), extrapolated to LHC with the LDL approximation (the most symmetric band) and with the hydrodynamic parametrization, with cs² = 0.33 and 0.22 (the upper and lower bands respectively). The uncertainties are calculated by propagating the 3σR + 3σw uncertainty (on radius and width) from the nuclear data [22] to the calculated eccentricity and dNch/dy. The uncertainty of the LDL extrapolation also includes the errors on the linear fit (see fig.1.4).
Figure 1.12 shows the centrality dependence of the integrated elliptic flow as a function of the charged multiplicity at midrapidity for the three extrapolations presented above: the most symmetric curve represents the linear extrapolation of the data in fig.1.4 (see sec.1.3.1), while the other two curves are the upper and lower limits in the relativistic hydrodynamic approach, with cs² = 0.33 and 0.22 respectively (see sec.1.3.2).
The centrality classes and the exact values used for the simulations are listed in tab.5.1 in the analysis chapter.
More recent developments suggest a slightly different extrapolation of v2 as a function of centrality, which better describes RHIC data [14]. The extrapolation is still based on relativistic hydrodynamics, but it includes viscous deviations [61]. However, for reasons of time, this model has not been used in the present thesis.
1.3.4 Differential Flow

In the hydrodynamic picture, a detailed comparison between different equations of state is achieved by looking at v2 versus the transverse momentum, for different particle species, in the low-pT region. In particular, the effect of a phase transition would be less pronounced for lighter particles such as pions compared to protons [62]: at the same collective flow velocity, heavier particles carry a higher momentum and are therefore less affected by the thermal motion (see fig.1.13).
Figure 1.13. Transverse momentum dependence of v2 for protons and pions [63]. The lines represent hydrodynamic calculations, assuming an EoS with (full) and without (dashed) phase transition.
Elliptic flow studies at RHIC show that by scaling both v2 and pT by the number of constituent quarks nq a universal curve is observed [45, 64], suggesting that the partons are the relevant degrees of freedom at least during the earliest stage of the system evolution when most of the elliptic flow is built up. The results for the most common mesons and baryons are shown in fig.1.14.
In the simulations performed for the present thesis, the differential shape of v2 versus pT has been parametrized as linearly increasing with pT up to its saturation value at pT = 2 GeV/c, after which v2(pT ) becomes flat [65]. The magnitude of the saturation v2 has been determined for each centrality class in such a way that the integrated 〈v2〉 (over the dN/dpT spectra of charged hadrons) are the extrapolated values shown in fig.1.12 (see sec.5.3 for the details).
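This two-segment parametrization can be sketched as follows (v2sat = 0.1 is an arbitrary illustrative value; in the simulations it is fixed per centrality class by the integrated 〈v2〉, as described above):

```python
def v2_pt(pt, v2_sat, pt_sat=2.0):
    """Differential v2(pT): linear rise up to pt_sat (GeV/c), then flat."""
    return v2_sat * min(pt / pt_sat, 1.0)

v2_sat = 0.10   # illustrative saturation value, not a thesis result
print(v2_pt(1.0, v2_sat))   # halfway up the linear rise
print(v2_pt(3.0, v2_sat))   # above saturation: flat at v2_sat
```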
Elliptic flow versus (pseudo)rapidity is assumed to be flat, in agreement with what is normally used in hydrodynamic calculations (v2 shows a plateau at y ∼ η ∼ 0 in the interval |η| ≲ 1 [66, 67]) and with existing RHIC data [12, 68]. On the experimental side this means that elliptic flow at η ∼ 0 is estimated by averaging the reconstructed v2(η) over a wide pseudorapidity interval, which in our case could be the entire acceptance of the ALICE central barrel detector (−0.9 < η < 0.9, see sec.2.1).
Figure 1.14. Elliptic flow v2 for identified particles (K⁰s, π⁺+π⁻, Λ+Λ̄, p+p̄, Ξ⁻+Ξ̄⁺ and Ω⁻+Ω̄⁺) scaled by the number of constituent quarks nq, plotted versus pT/nq; the line is a fit to the K⁰s and Λ data [45].
Since the aim of the present analysis is the measurement of the elliptic flow of unidentified charged particles, no particle-type dependence of v2 has been implemented in the simulations.
1.4 Non-Flow Correlations

The elliptic flow observed in the final state arises from the anisotropic expansion of the system, which is due to the initial azimuthal asymmetry of the collision along the direction defined by the reaction plane. Therefore the coefficient v2 quantifies the correlation between the directions of radiated particles and the orientation of the reaction plane.
However, the reaction plane is not directly observable in a real experiment (the experimental methods to estimate its direction and the magnitude of elliptic flow will be described in chapter 3); what is measurable experimentally is the 'event plane', which is reconstructed from the azimuthally anisotropic particle distribution (see sec.3.2).
The reconstructed event plane approximates the real reaction plane only if flow is the sole source of azimuthal correlation. However, in real experiments, other physics phenomena can affect the spatial distribution of particle trajectories. Due to jet emission, resonance decays and momentum conservation, particles are mutually correlated with no respect to the orientation of the reaction plane. These effects are summarized under the concept of 'non-flow', defined as azimuthal correlations between k-tuples (i.e. pairs, triplets, ...) of radiated particles.
Depending on the analysis method, non-flow effects introduce a systematic error in the flow measurement, and non-flow contributions at LHC energies represent a major uncertainty in the flow analysis at ALICE.
In the present thesis, non-flow effects have been simulated using Hijing (see sec.2.2.3), and part of the study has been devoted to characterizing their sources and to comparing their magnitude to the expected flow signal (thereby defining the applicability limits of the event plane analysis). The details of this study and the analysis results are given in sec.4.1.
Chapter 2
Experimental Setup and Analysis Framework
ALICE (A Large Ion Collider Experiment [69]) is an experiment dedicated to study heavy ion collisions at the LHC (Large Hadron Collider [70]), located at CERN.
One of the main physics goals of ALICE (and the major topic of this thesis) is the measurement of ‘anisotropic flow’ in PbPb collisions (and in particular elliptic flow, see section 1.1). Flow is a collective phenomenon classified as ‘soft physics’, since its observation requires the ability to reconstruct and identify particles down to very low momentum. Besides soft physics, the ALICE program will cover many other physics observables in heavy ion collisions, e.g. jets, heavy quarks, direct photons, HBT interferometry, etc. This led to the construction of a multipurpose detector combining different detection techniques.
In the first part of this chapter, the ALICE detector will be described, devoting more attention to the subdetectors directly involved in the flow measurement (see section 2.1). Since the LHC was not yet operational during the development of the present thesis, the analysis presented in the following chapters is entirely based on simulations. The second part of this chapter therefore describes the simulation and analysis framework that has been used (section 2.2). The last section of this chapter describes the procedure for track reconstruction and particle identification implemented in the ALICE software framework, a prototype of the one that will be used during the real experiment (see section 2.3). The final output of the reconstruction algorithm is a data structure (the ALICE Event Summary Data) that constitutes the starting point of the flow analysis, as will be described in chapter 3.
2.1 The ALICE detector at LHC

The heavy ion program at LHC, which is supposed to start after the first pp run, will collide the largest available nuclei at the highest possible energy (PbPb collisions at √sNN ≃ 5.5 TeV), and also explore different systems (pA, AA) at different beam
energies. The nominal luminosity of the LHC for PbPb collisions is 1 mb−1 s−1, i.e. an event rate of about 8000 minimum bias collisions per second. About 5% of them will correspond to the most central events, with a multiplicity of about 2000 charged particles per unit rapidity (see sec.1.3).
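The quoted rate follows from a one-line luminosity calculation. In the sketch below the ~8 barn inelastic PbPb cross section is an assumed round number, chosen to reproduce the 8000 Hz figure:

```python
# Nominal Pb-Pb luminosity of the LHC, expressed in mb^-1 s^-1
# (1 mb^-1 s^-1 corresponds to 1e27 cm^-2 s^-1).
luminosity_mb = 1.0
# Assumed inelastic Pb-Pb cross section of ~8 barn (8000 mb); this round
# number is an assumption consistent with the quoted 8000 Hz rate.
sigma_inel_mb = 8000.0
rate_hz = luminosity_mb * sigma_inel_mb   # minimum bias interaction rate
central_rate_hz = 0.05 * rate_hz          # most central 5% of events
```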
Figure 2.1. General layout of the ALICE detector [71]. For visibility, the HMPID detector is shown at the 12 o’clock position instead of the 2 o’clock position where it will actually be. For the meaning of the abbreviations refer to the text.
This low interaction rate, together with the high multiplicity environment, led to the design of slow but highly granular tracking detectors. The soft physics domain requires a wide acceptance tracking device with low material density, immersed in a moderate magnetic field. In addition, particle identification over a wide momentum range is required, which implies the implementation of many different identification techniques (energy loss, time of flight, transition radiation, and Cherenkov light).
ALICE is a general purpose detector to measure and identify hadrons, leptons and photons produced in the interaction, from very low to very high transverse momentum (100 MeV/c < pt < 100 GeV/c). It consists of a central detector system, designed to provide full tracking at midrapidity (−0.9 < η < 0.9) over the full azimuth, and several forward detectors.
The experimental setup of ALICE is extensively described in the ALICE Technical Proposal [72] and its addenda [73,74], and in the ALICE Physics Performance Report [71]. The detector systems are described in the various Technical Design Reports (TDR [75–87]). The Trigger System is described in [88], and the Data Acquisition System in the ALICE DAQ manual [89].
Tracking and particle identification in the central rapidity region rely on four separate layers of detectors with 2π coverage (ITS, TPC, TRD and TOF), immersed in a uniform magnetic field parallel to the beam axis. The ALICE experiment is designed to run with three possible configurations of the magnetic field, B = 0.2, 0.4 and 0.5 Tesla (value of B at the center of the ALICE solenoid). The magnitude of the magnetic field affects the transverse momentum acceptance: a stronger magnetic field gives a better resolution at high pT but worsens the efficiency at low pT. The current default value of the magnetic field is B = 0.4 T 1.
The detector arrangement (and in particular the Inner Tracking System, see sec.2.1.1) provides high granularity close to the interaction point to reconstruct short-lived resonances and B and D mesons. The magnetic field is generated by the large solenoidal L3 magnet which contains the experiment (see fig.2.1).
The central system is complemented by a high momentum particle identification detector (HMPID [80]), a high resolution array of ring-imaging Cherenkov detectors (located at |η| < 0.6 with an acceptance of 57.6◦ in azimuth). Photons are reconstructed in a high density crystal photon spectrometer (PHOS [81]), which covers a small η slice (|η| < 0.12) at midrapidity and 100◦ in φ. A future upgrade of the experiment foresees an electromagnetic calorimeter (EMCAL) to be installed over 100◦ in azimuth in the central rapidity region, to help the identification of charged leptons and photons.
Muon detection is performed by a forward spectrometer, which covers a high pseudorapidity cone (−4.0 < η < −2.4) on the negative z side 2 of the central detector (MUON [83]). The muon spectrometer is equipped with an absorber to filter out hadrons and photons from the interaction, a dipole magnet, and two separate arrays of tracking chambers, before and after the dipole magnet, for muon momentum measurement.
To complement the central detection system, other detectors are used to characterize the centrality of the events: a silicon strip forward multiplicity detector (FMD [87]) for measuring charged particle multiplicity, and a preshower photon multiplicity detector (PMD [85]) for measuring the multiplicity and spatial distribution of photons on an event-by-event basis. They are located at the two opposite sides of the interaction point.
The fast trigger signal is provided by an array of scintillators and quartz counters close to the interaction point: the V0 and T0 detectors [88]. The T0 detector, with two arrays of Cherenkov counters placed on both sides of the interaction point, is particularly important because, due to its fast response, it is used to start the other detectors.
1 This is the default setting of the release v404Rev14 of AliRoot (the one in use for the PDC06 production, including the simulations presented in chapter 5).
2 In the laboratory frame, the z axis is defined by the direction of the beam pipe (see also sec.2.3).
About 100 meters away from the collision point, a Zero-Degree Calorimeter (ZDC [82]) uses both hadronic and electromagnetic showers to measure the energy carried away by non-interacting nucleons (spectators 3). The ZDC consists of two distinct quartz fiber calorimeters, one for spectator neutrons, placed at zero degrees relative to the z axis, and one for spectator protons, placed externally to the beam pipe on the side where positive particles are deflected. In an ideal case, dividing the collected energy by the average energy per nucleon at LHC (i.e. 2.76 TeV/nucleon in a 208Pb beam) would immediately give an estimate of the centrality of the collision. In the real experiment not all the spectator nucleons can be detected.
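The ideal-case estimate can be sketched as follows; the 276 TeV ZDC reading is a purely hypothetical example number, and the participant count follows the relation Npart = A − Nspec given in footnote 3:

```python
# Ideal-case centrality estimate from the spectator energy seen by the ZDC.
A = 208                        # mass number of Pb
e_per_nucleon_tev = 2.76       # energy per nucleon of the Pb beam at LHC
e_zdc_tev = 276.0              # assumed (hypothetical) total spectator energy
n_spec = e_zdc_tev / e_per_nucleon_tev   # number of spectator nucleons
n_part = A - n_spec                      # number of participants (footnote 3)
```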
The elliptic flow measurement described in this thesis requires full tracking over 2π of azimuthal coverage, which is the domain of the central barrel detector system. In the following, the four main components of the central system will be briefly described; their combined track reconstruction will be presented in section 2.3.
2.1.1 ITS

From the interaction point outwards, the first component of the ALICE detector is the Inner Tracking System (ITS [75]): six concentric layers of silicon detectors, with a design based on three different silicon techniques (fig.2.2).
Figure 2.2. Layout of the ITS detectors, showing the spatial arrangement of the three layer types.
The position and segmentation are optimized for efficient track finding and for a high spatial resolution in a high multiplicity environment.
The high particle density (80 particles/cm2 at 4 cm from the interaction point) and the requirements on spatial resolution are the main reasons for choosing a Silicon Pixel Detector (SPD) for the innermost two layers. The following two layers are
3 From the number of spectators Nspec, the number of participating nucleons can be calculated as Npart = A − Nspec, where A is the mass number of the colliding nucleus.
a Silicon Drift Detector (SDD), and where the track density becomes lower than one particle per cm2 (> 40 cm from the interaction point) there are two layers of double-sided Silicon Strip Detectors (SSD). Both the SDD and the SSD layers have an analog readout for dE/dx measurement, which allows low-pT particle identification using the Bethe-Bloch model for energy loss.
Table 2.1. Essential details of the ITS detectors.

Detector  Layer  r (cm)  ±z (cm)  |η|    σrφ (µm)  σz (µm)  Channels
SPD       1      4.0     14.1     1.98   12        100      3,278,400
SPD       2      7.2     14.1     0.9    12        100      6,556,800
SDD       3      15.0    22.2     0.9    38        28       43,008
SDD       4      23.9    29.7     0.9    38        28       90,112
SSD       5      38.5    43.2     0.9    20        830      1,148,928
SSD       6      43.6    48.9     0.9    20        830      1,459,200
The ITS has a pseudorapidity acceptance of |η| < 0.9 for all vertices located within ±5.3 cm of the beam intersection. The first layer of the SPD has a larger pseudorapidity coverage (|η| < 1.98), so that it, together with the Forward Multiplicity Detectors (FMD) 4, provides continuous coverage in rapidity for the measurement of charged-particle multiplicity. Information about the ITS is summarized in tab.2.1.
The material budget of the ITS has been kept as low as possible (X/X0 ≃ 7% for perpendicular tracks) in order to maximize the efficiency at low momentum: a thick layer very close to the interaction point would act as a shield, preventing tracks from entering the TPC.
2.1.2 TPC

The ITS is surrounded by the large cylindrical volume of the Time-Projection Chamber (TPC [76]), a conventional device in heavy ion experiments, already successfully used by NA49 and STAR.
The field cage has a total volume of about 88 m3 (making it the largest Time Projection Chamber ever built), and the detector is optimized for an extremely high multiplicity environment, safely overestimated as about 8000 tracks per unit rapidity (∼20,000 tracks in the whole TPC coverage [71]). This can be handled at a rate of 400 minimum bias PbPb collisions per second (400 Hz), and up to 1 kHz for pp [71].
The TPC is the main tracking device in ALICE; track seeding starts at the outer radius of the TPC (see sec.2.3). It has 2π azimuthal coverage and an acceptance
4 The Forward Multiplicity Detectors measure charged-particle multiplicity in the pseudorapidity ranges −3.4 < η < −1.7 and 1.7 < η < 5.1.
Figure 2.3. Layout of the TPC, showing the orientation of the electric field toward the central membrane (e− drift to the endcaps).
|η| < 0.9 for full radial tracking 5. For partial tracking (tracks not reaching the outer radius of the TPC), an acceptance up to |η| ∼ 1.5 is accessible.
The ALICE TPC is an ideal device for soft physics observables: the momentum resolution is estimated between 1% and 2% for low-momentum tracks (100 MeV/c < pT < 1 GeV/c), depending on the magnetic field (see sec.5.1).
The material budget for the TPC is kept low to minimize multiple scattering and secondary particle production. Both field cage and drift gas are made of materials with small radiation length; the material budget of the TPC is 3.5% < X/X0 < 5% for tracks in the central rapidity acceptance (|η| < 0.9).
The field cage has a central high-voltage electrode that divides the TPC volume into two halves, and two opposite sets of axial potential dividers (18 field degraders, 1 per sector) to create a uniform electric field on both sides (fig.2.3).
The readout chambers, located on the two endcaps of the TPC cylinder, are standard multiwire proportional planes with cathode pad readout, segmented in
5 The TPC is a cylindrical volume with radius r = 2.47 m and half-length lz = 2.5 m (total length 5 m), centered around the beam-crossing point (0, 0, 0). Neglecting the displacement of the vertex in the transverse plane (which is of the order of a few tens of µm), the η acceptance of the TPC is given by η(z0) = − log (tan (θ(z0)/2)), where θ(z0) = tan−1(r/(lz − z0)) is the longitudinal angle under which the TPC is seen from the interaction point. For events with main vertex at the center of the cylinder the TPC has a symmetric acceptance |η| ≃ 0.891.
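The quoted acceptance edge can be checked numerically. The small sketch below assumes the outer active radius r = 2.466 m and a 2.5 m half-length of the active volume (the footnote quotes a 5 m total length); with these values the formula reproduces η ≃ 0.891:

```python
import math

def tpc_eta_edge(r=2.466, half_length=2.5, z0=0.0):
    """Pseudorapidity edge of the TPC cylinder as seen from a primary
    vertex at z0 (all lengths in metres): eta = -ln tan(theta/2),
    with theta = atan(r / (half_length - z0))."""
    theta = math.atan(r / (half_length - z0))
    return -math.log(math.tan(theta / 2.0))

eta_edge = tpc_eta_edge()   # vertex at the center of the cylinder
```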
Table 2.2. Synopsis of TPC parameters.

Pseudorapidity coverage: −0.9 < η < 0.9 for full radial track length; −1.5 < η < 1.5 for 1/3 radial track length
Azimuthal coverage: 2π
Radial position (active volume): 845 < r < 2466 mm
Length (active volume): 5000 mm
Segmentation: 18 (φ), 2 (r), 2 (z)
Pad rows: 159 (63 inner + 96 outer)
Material budget: X/X0 = 3.5 to 5% for 0 < |η| < 0.9
Detector gas: 88 m3 of Ne/CO2 (90%/10%)
Drift length: 2 × 2500 mm
Drift field: 400 V/cm
Drift velocity, time: v = 2.84 cm/µs, tmax = 88 µs
Position resolution (σ) in rφ: 1100 to 800 µm (inner / outer radii)
Position resolution (σ) in z: 1250 to 1100 µm
dE/dx resolution, isolated tracks: 5.5%
dE/dx resolution, dN/dy = 8000: 6.9%
18 sectors in φ with 2 readout chambers per sector (inner chamber 84.1 < r < 132.1 cm, outer chamber 134.6 < r < 246.6 cm). In total there are 18 × 2 × 2 = 72 readout chambers, for a total of 159 radial pad rows.
The inactive areas between neighboring inner chambers are aligned with those between neighboring outer chambers, to optimize the momentum precision for high momentum tracks; this has the drawback of creating dead zones in the azimuthal acceptance 6 (the detector is insensitive over about 10% in φ).
The analog readout of the TPC allows particle identification by dE/dx measurement, both in the low momentum region, where the expected ionization of the different particle types is well separated, and at very high pT, thanks to the relativistic rise of the Bethe-Bloch curve (see sec.2.3). Information about the TPC is summarized in tab.2.2.
6 Each sector of the TPC covers ∼18◦ in φ, with a gap between two neighboring sectors of ∼2◦. This results in ∼324◦ of azimuthal coverage and ∼36◦ of dead area, located at φ = n × 18◦ ± 1◦.
2.1.3 TRD and TOF

The Transition-Radiation Detector (TRD [77]) is located around the TPC (fig.2.4 (a)). It provides electron identification in the central barrel for momenta greater than 1 GeV/c by detecting the transition radiation (TR) produced by those particles in the radiator, i.e. the radiation emitted by fast particles (with relativistic γ > 1000) when crossing a boundary between two materials with different dielectric constants.
In the momentum range from 1 to 10 GeV/c, only electrons (and positrons) are sufficiently relativistic, due to their small mass. The TR process causes a larger energy release in the detector material (due to TR photons), which allows pions to be separated from electrons with a misidentification probability of less than 1%.
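The mass argument can be made concrete with a quick γ = E/m check (a Python sketch; the 1 GeV/c momentum is just a representative value in the TRD range, masses in GeV/c2):

```python
import math

def lorentz_gamma(p_gev, m_gev):
    """gamma = E/m in natural units, for momentum p and mass m in GeV."""
    return math.sqrt(p_gev ** 2 + m_gev ** 2) / m_gev

gamma_electron = lorentz_gamma(1.0, 0.000511)   # electron at 1 GeV/c: ~2000
gamma_pion = lorentz_gamma(1.0, 0.13957)        # pion at 1 GeV/c: ~7
```

At the same momentum the electron is far above the γ ≈ 1000 onset of transition radiation while the pion is far below it, which is what makes the TR signal an electron tag.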
The TRD fills the radial space between the TPC and the TOF detector; it also has 2π azimuthal coverage and a pseudorapidity acceptance |η| < 0.9. The TRD consists of 6 individual layers, divided into 18 sectors to match the azimuthal segmentation of the TPC and 5 segments along z. In total there are 18 × 5 × 6 = 540 detector modules, each made of a sandwich radiator and a multiwire proportional readout chamber.
Figure 2.4. (a) Cut through the TRD with the TPC inside. (b) TOF sector (supermodule), consisting of five modules inside the space frame which surrounds the TRD.
The last layer with 2π azimuthal coverage in the central barrel is the Time-Of-Flight detector (TOF [78]). Its cylindrical surface covers the central pseudorapidity region (|η| ≤ 0.9) and provides particle identification in the intermediate momentum range (0.2 < pT < 2.5 GeV/c, see sec.2.3).
The time of flight of detected particles is calculated from the delay between the fast trigger signal given by the T0 detector (minus a fixed tT0 = zT0/c) and the TOF signal. This allows particle identification by calculating the invariant mass
with the relativistic formula:

m = ptot/(βγ) ,   (2.1)

where the relativistic γ = 1/√(1 − β2), and β is calculated from the TOF signal as β = ltrk/(c · tTOF) (ltrk is the length of the track, calculated from the track fit, and c is the speed of light). The invariant mass obtained in this way is used to compute the probability of the particle to be of a specific type.
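Equation (2.1) can be turned into a small round-trip check (a Python sketch; the 3.7 m flight path and the 0.5 GeV/c pion momentum are illustrative numbers, not ALICE specifics):

```python
import math

C_M_PER_NS = 0.299792458    # speed of light in m/ns

def tof_mass(p_gev, length_m, t_ns):
    """Invariant mass from eq. (2.1): beta = l/(c*t), m = p/(beta*gamma)."""
    beta = length_m / (C_M_PER_NS * t_ns)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return p_gev / (beta * gamma)

# Round trip with an assumed pion (m = 0.13957 GeV/c^2) at p = 0.5 GeV/c
# over an illustrative 3.7 m flight path.
m_pi, p, length = 0.13957, 0.5, 3.7
beta_true = p / math.sqrt(p ** 2 + m_pi ** 2)
t_flight = length / (C_M_PER_NS * beta_true)   # TOF such a pion would give
m_rec = tof_mass(p, length, t_flight)          # recovers the pion mass
```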
The modular structure of the TOF corresponds to 18 sectors in φ (matching the TPC and TRD) and 5 segments in z (fig.2.4(b)). Each TOF module is a Multigap Resistive-Plate Chamber (MRPC [90]), which can operate efficiently in extreme multiplicity conditions.
The electric field is high and uniform over the whole gas volume of the detector, so any ionization produced by a charged particle passing through immediately starts an avalanche process, which eventually generates the observed signals on the pickup electrodes. Since there is no drift time associated with the movement of the electrons to a region of high electric field, the time uncertainty of these devices is caused only by the fluctuations in the growth of the avalanche.
2.2 The Off-Line Framework

Since the analyses presented in this thesis are entirely based on simulated data, the following section describes the simulation and analysis framework in use at ALICE. The ALICE Offline framework, AliRoot, is a full experimental environment built on top of ROOT.
2.2.1 ROOT

ROOT [91] is a widely adopted software framework for experimental high-energy physics that offers a common set of features and tools for many domains: generation of events, detector simulation, data reconstruction, data storage, analysis and visualization.
It was initially developed in the context of a heavy ion experiment (NA49 at CERN [92]) in 1995 [93], following the then-new standards of Object-Oriented programming. The ROOT framework has rapidly replaced most of the older FORTRAN tools, which were still very popular at the time, and has become an essential piece of software in experimental particle physics.
Thanks to the object-oriented approach, the system can be easily extended to other domains, e.g. interfaces for remote or distributed analysis (see sec.2.2.4), or the implementation of user-defined macros and libraries (the AliFlow package is a good example, see section 3.3).
The built-in C++ interpreter (CINT [94]) provides the possibility to use both C++ macros and compiled ‘shared object’ libraries 7. ROOT is in fact a versatile system that can be dynamically extended.
In the ALICE collaboration, ROOT has been adopted as the underlying system for data acquisition, simulation and analysis.
2.2.2 AliRoot and the ALICE Offline Project

Many collaborations have developed their own ROOT-based tools to better satisfy the specific needs of their experiments. The STAR collaboration is an example of this approach, with the implementation of the Star Class Libraries (SCL [96]). A more radical strategy has been adopted by the ALICE collaboration, giving birth to a complete experimental framework named AliRoot.
Brief History of AliRoot
A Geant3-based simulation program (gAlice [97]) was originally developed for the Technical Proposal of the ALICE experiment at LHC [72]. It was an object-oriented prototype for data reconstruction, mainly written in FORTRAN and built on top of existing Monte Carlo codes (such as GEANT [98] [99] and FLUKA [100]).
After the publication of the Technical Proposal (TP [72]) in 1995, simulations became an essential tool for the detailed design of the detectors and for the development of the Technical Design Reports (TDR [75–87]) for the various ALICE subdetectors. It became clear that a substantial upgrade of the gAlice package was necessary. A second version of gAlice was quickly prototyped, still using the ‘Geant3’ simulation program (in FORTRAN) but completely wrapped into a C++ class. This rapid prototyping was possible thanks to the availability of ROOT as a framework and to the active support of the ROOT team. The result of this activity was a suitable tool for simulations, which combined the advantages of Object-Oriented programming with the robustness of the ROOT framework; the output of the simulations were persistent objects that could be stored on disk.
The official adoption of ROOT by the ALICE Offline Project took place in November 1998. As a consequence, new C++ versions of the simulation programs started to be developed, together with the digitization and reconstruction code, now based on ROOT as a common framework. Since version 3, the name ‘AliRoot’ has been adopted and the simulation and reconstruction code has been completely rewritten in C++.
The version of AliRoot that has been used in the present thesis is the release v404Rev14. The entire framework is constantly under development [101].
7 A ‘shared object’ library (with extension ‘.so’) is the standard format of dynamically linked libraries on the Linux platform, usually compiled with ‘gcc’ [95].
The AliRoot Framework
AliRoot is a complete experimental framework to simulate, reconstruct and analyze heavy ion data in the ALICE environment.
Heavy ion collisions are simulated using a Monte Carlo event generator (see sec.2.2.3). Using the transport code from Geant [99], the generated particles are propagated through the detector response simulation packages, and transformed into digitized signals that match the detector layout of real reconstructed data.
The result of this process is the production of ‘raw data’, i.e. data representing the digitized output of the ALICE detector, which can be submitted to the event reconstruction chain. Simulated data are then processed in the same way as data from the real experiment: the tracking algorithm fits the reconstructed space points (clusters) in each detector and calculates the particle trajectory. Analog detector signals are also associated to the fitted tracks, and the energy loss and the Time-Of-Flight signals are used to calculate the Bayesian weights for particle identification (see sec.2.3.2).
A useful feature of the transport code, as implemented in AliRoot, is that it keeps track of which simulated particles produced a signal in the sensitive volume of the detectors, by associating the particle’s label to every ‘hit’ produced in the detector. At the end of the simulation, the reconstructed tracks can be compared one-to-one to the original simulated particles, which is very useful for calculating the reconstruction efficiency and for optimizing the analysis cuts.
The simulation process can be summarized in the following steps:
• Event generation: The collision is simulated by an event generator, which produces an array of final-state particles with outgoing momenta, propagating from the main vertex of the collision (which can be set at any position along the beam intersection, see sec.5.1.4). This array is called ‘KineTree’, and it is a ROOT TTree structure containing a list of TParticles 8 with their complete kinematics (see sec.2.2.3).
• Particle transport: Particles emerging from the interaction are propagated along the direction of their momenta, and the transport code (Geant [99]) simulates the interactions with the detector material (particle decays, particle scattering, ionization processes and energy deposition) by calculating the probability of random microscopic processes between the particle and its surroundings. Whenever a secondary particle is produced, it is added to the KineTree and transported as well; the transport code stops when a particle exits the detector volume or when a low energy threshold is reached (the particle stops). During the transport process, the information contained in the TParticle is reduced to that generated by a particle crossing the detector.
8 The ROOT TParticle class is meant to summarize the relevant information of a physical particle, such as momentum, charge, mass and particle type. For more information see the online ROOT documentation [91].
• Detector response: The energy deposited in the detector is then translated into a detector response (a ‘hit’), according to the geometry of the detector and the implemented detection techniques (this is the ideal detector response).
• Digitization: The detector response is digitized and formatted according to the output of the front-end electronics and the data acquisition system (DAQ [89]); some smearing of the signal due to electronic noise is applied at this step. The resulting data closely resemble the real output that will be produced by the detector.
• Event reconstruction: The reconstruction algorithm fits the reconstructed space points to produce track candidates (AliESDtracks), and retrieves or calculates all the relevant information available (fit parameters, energy loss, particle identification hypothesis); it also extrapolates the interaction vertex and reconstructs neutral decay vertices (see sec.2.3). Each reconstructed event is stored as an ALICE Event Summary Data object (class AliESD).
The whole procedure is handled by AliRoot and can be executed at any time using a few simple commands and a configuration script (to specify the event generator, detector settings and reconstruction parameters). However, a full simulation requires a few hours of computing time, depending on the particle multiplicity and the number of detectors switched on.
2.2.3 Event Generators

Since a full and complete description of the processes occurring in heavy ion collisions has not been achieved yet, AliRoot incorporates a few Monte Carlo event generators, specifically implemented to simulate different physics observables.
The analysis described in this thesis made use of two different event generators (both available in the standard release of AliRoot), ‘Hijing’ and ‘GeVSim’. They are briefly described in the following two subsections.
Hijing

Hijing (Heavy Ion Jet INteraction Generator [102] [103] [104]) is a multipurpose heavy ion event generator implemented in FORTRAN and wrapped into a C++ class, so that it can be easily incorporated in the AliRoot framework. Hijing offers a very good description of jet production in nucleus-nucleus collisions, incorporating all known physics effects of a superposition of multiple proton-proton collisions, plus some parametrizations of soft physics observables. Its implementation is based on a perturbative-QCD-inspired model, where multiple minijet production is combined with a Lund-type model for jet fragmentation [105].
In high-energy nuclear interactions, and especially in relativistic heavy ion collisions, the multitude of hard and semihard parton scatterings results in the production of an enormous number of jets, which can be described in terms of perturbative QCD (pQCD). Minijets are expected to dominate the transverse energy production in the central rapidity region.
In Hijing, multiple interactions are calculated using Glauber geometry, and a parametrization of the parton distribution function of the nucleus is used to take parton shadowing into account. Jet quenching is modeled using a parametrized energy loss dE/dz of partons traversing the dense medium. The program uses subroutines of PYTHIA [106] to generate the kinematic variables of each hard scattering process and the associated radiation, and JETSET [107] for string fragmentation. Due to its implementation in terms of pQCD, Hijing is only valid for collisions with center of mass energy √sNN above 4 GeV per nucleon, which makes it perfectly suitable for LHC collisions.
However, being a superposition of many pp collisions, Hijing events do not contain any collective effect such as anisotropic flow, while other typical heavy ion observables are added by ad-hoc routines (e.g. the jet-quenching effect [108]). Another disadvantage of Hijing is the particle multiplicity, which is too large with respect to the current predictions for LHC (this problem can be taken care of by rescaling the centrality of the collisions, as is done in sec.4.3).
In the present thesis, Hijing has been used to simulate the background of the flow measurement (i.e. non-flow effects) arising from the presence of jet-like correlations and resonance decays (see sec.4.1). In sec.4.3 and 5.4, collective flow has been added on top of the Hijing simulations by boosting the generated events with the flow AfterBurner (see below).
GeVSim and the flow AfterBurner
GeVSim [109] [110] is a fast and easy-to-use Monte Carlo event generator, based on the MeVSim [111] event generator developed for the STAR experiment (written in FORTRAN), and reimplemented in C++ for AliRoot.
It does not reproduce the physics of the heavy ion reaction, but simply radiates user-defined particle types out of the primary vertex, with a custom momentum spectrum parametrized with respect to pT, η and φ. The dN/dpT and dN/dη distributions can be expressed analytically or with user-defined histograms, while the azimuthal distribution is described by two Fourier coefficients v1 and v2 (representing directed and elliptic flow, see eq.1.1), which can be expressed as functions of pT and η.
At the present time, GeVSim offers the simplest way to parametrize anisotropic flow in heavy ion events, just by introducing a modulation of the generated dN/dφ distribution with respect to the reaction plane angle. The Fourier expansion of the azimuthal distribution implemented in GeVSim is truncated at the second coefficient; the azimuthal anisotropy is therefore parametrized as:

E d3N/dp3 = (1/2π) d2N/(pT dpT dy) [1 + V1(pT, y) cos (φ−Ψ) + V2(pT, y) cos (2[φ−Ψ])] ,   (2.2)

where φ is the azimuthal angle of the particles, Ψ is the reaction plane angle, and Vn(pT, y) (n = 1, 2) are the first and second Fourier coefficients.
The Fourier coefficients can be set separately for each particle type, and they can be constants or functions of pT and η. In particular, the parametrization used in this thesis is:
V1(pT, η) = 0 ,
V2(pT, η) = v2^sat · pT/pT^sat   if pT < pT^sat ,
V2(pT, η) = v2^sat               if pT ≥ pT^sat ,   (2.3)

with v2 (and therefore v2^sat) assigned according to the centrality of the event (see sec.1.3) and pT^sat = 2 GeV/c. The event plane angle is generated with random orientation (as it will be in real collisions).
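The parametrization (2.3) amounts to a one-line function (a Python sketch; the function name and the chosen v2^sat = 0.10 are illustrative, not GeVSim code):

```python
def v2_of_pt(pt_gev, v2_sat, pt_sat=2.0):
    """Eq. (2.3): v2 rises linearly with pT below pt_sat (GeV/c) and
    saturates at v2_sat above it; v1 is identically zero."""
    return v2_sat * min(pt_gev / pt_sat, 1.0)

v2_low = v2_of_pt(1.0, 0.10)    # below saturation: half of the plateau value
v2_high = v2_of_pt(5.0, 0.10)   # above saturation: the plateau value itself
```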
Events can also be produced with any other event generator and then boosted with the GeVSim ‘AfterBurner’, which adds flow on top of an existing array of final state particles. The AfterBurner is applied to an existing KineTree, and it distorts the dN/dφ distribution according to the specified values of v1 and v2, with respect to an event plane angle that must be specified on an event-by-event basis.
In sec.4.3 and 5.4, Hijing simulated events have been boosted with the flow AfterBurner, in order to obtain ‘realistic’ heavy ion events containing both collective flow and jet-like azimuthal correlations. However, the ‘boost’ is applied on top of the Hijing simulated event, where jets and strong resonance decays have already taken place (weak and electromagnetic processes, instead, are performed at a later stage by the transport code); therefore in our procedure part of the non-flow effects is probably washed away.
The AfterBurner is fed with the same reaction plane generated by Hijing, which is distributed randomly over 2π in azimuth. The magnitude of v2 is calculated as a function of the impact parameter of the collision (after the proper rescaling) using the hydro parametrization (see sec.1.3).
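The principle of such an afterburner can be sketched as a first-order angle shift Δφ = −v2 sin(2(φ − Ψ)), which imprints a cos 2(φ − Ψ) modulation on an initially uniform azimuthal distribution (a toy Python sketch, not the actual GeVSim code; the chosen v2 = 0.05 is illustrative):

```python
import math
import random

random.seed(7)

def add_v2(phi, v2, psi):
    """First-order flow afterburner: shifting each azimuth by
    dphi = -v2*sin(2*(phi - psi)) imprints a 1 + 2*v2*cos(2*(phi - psi))
    modulation on an initially uniform dN/dphi (valid for small v2)."""
    return phi - v2 * math.sin(2.0 * (phi - psi))

psi = random.uniform(0.0, 2.0 * math.pi)    # event plane, random per event
phis = [random.uniform(0.0, 2.0 * math.pi) for _ in range(200000)]
boosted = [add_v2(phi, 0.05, psi) for phi in phis]
# Measure v2 with respect to the (known) event plane angle
v2_meas = sum(math.cos(2.0 * (phi - psi)) for phi in boosted) / len(boosted)
```

Measuring the boosted sample with respect to the known event plane recovers the input v2 up to statistical fluctuations, which is the basic consistency check used when validating an afterburner.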
2.2.4 AliEn and LCG
The large amount of data that is going to be produced by ALICE (and more generally by the LHC experiments) requires very large storage and computing power. One month of Pb-Pb collisions in ALICE will produce roughly 1 PByte of data (1 PetaByte = 1,000,000 GigaBytes). Thus the construction of the LHC required the parallel implementation of a computing infrastructure capable of dealing with such a huge amount of data.
The LCG (LHC Computing Grid [112]) is a network-based framework for distributing jobs and data over the resources available worldwide (both CPUs and storage elements).
The ALICE Off-line collaboration has developed its own way to access this grid, the ALICE Environment (‘AliEn’ [113] [114]). Massive event simulations (e.g. Particle Data Challenges or PDCxx [101]) are currently produced through this environment, and during the real experiment the grid will provide the computing power for raw-data reconstruction and distributed analysis. AliEn provides a virtual file catalogue (to access distributed datasets) and different web services such as user authentication, job execution, file transport and performance monitoring [115].
During the development of the present thesis, the LCG grid has been used to produce the simulations presented in chapter 5. Some effort has also been devoted to interfacing the flow analysis package to the AliEn environment, through the implementation of an AliFlowTask for the creation of AliFlowEvents from AliESDs and their subsequent analysis (see sec.3.3). In this way the job can be submitted to the grid through a ROOT task manager (the AliTaskManager) for distributed analysis.
2.3 Track Reconstruction in the Central Barrel Detectors
The central barrel detector system of ALICE mainly consists of tracking devices: charged particles passing through leave discrete signals (‘clusters’) at the space points they traverse, and a reconstruction algorithm fits these space points into track candidates to reconstruct the particle kinematics. This operation is called track reconstruction or tracking.
The combined track reconstruction in the central barrel system collects information from the different subdetectors in order to optimize the track reconstruction performance (the details of the tracking procedure are described in chapter 5 of the ALICE Physics Performance Report [25]).
Reconstructed space points are represented in the global coordinate system of ALICE 9, with the z axis along the beam pipe (oriented in the opposite direction with respect to the muon arm), the y axis pointing upward, and the x axis completing a right-handed Cartesian system (it points outward with respect to the LHC ring). The origin is defined by the intersection of the z axis with the central membrane plane of the TPC.
The track fitting algorithm uses the Kalman filter [116] [117], a general and powerful method for local track finding. Tracks are approximated with a ‘helix’ 10
9Note: this is the global coordinate system of the detector. On an event-by-event basis, the origin of the coordinate system (to which track and V0 coordinates refer) is located at the reconstructed position of the main vertex.
10The helix perfectly describes the ideal trajectory of a charged particle moving in a uniform magnetic field, where the Lorentz force acts perpendicularly to the direction of motion.
and parametrized by a set of five parameters, such as the curvature and the angles with respect to the coordinate axes. The Kalman filter performs an iterative fit, adding the space points found along the trajectory of the helix. The fit parameters are updated at each additional point (after some rejection criteria), improving the quality of the fit at every step. The method is suitable for simultaneous track recognition and fitting, and makes it possible to reject incorrect space points ‘on the fly’. Moreover, the Kalman filter offers a natural way to extrapolate tracks from one detector to another (e.g. from the TPC to the ITS or the TRD). The reconstruction algorithm is fully integrated within the AliRoot framework, and uses the same detector classes involved in the simulation [101].
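A toy one-dimensional version of the Kalman measurement update illustrates the update-and-reject logic described above (the real fit propagates five helix parameters with a 5 × 5 covariance matrix; all names here are ours):

```python
def kalman_update(x, P, z, R, chi2_max=9.0):
    """Combine the current estimate x (variance P) with a new
    measurement z (variance R).  Space points contributing a too
    large chi2 are rejected 'on the fly', as described in the text."""
    chi2 = (z - x) ** 2 / (P + R)
    if chi2 > chi2_max:
        return x, P, False          # outlier rejected, estimate unchanged
    K = P / (P + R)                 # Kalman gain
    return x + K * (z - x), (1.0 - K) * P, True   # variance shrinks at every step
```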
Track reconstruction is done in three passes:
1st) track finding and fitting inward from the TPC to the ITS: tracking starts in the outermost pad rows of the TPC, where the spatial separation between tracks is largest. Track seeds are calculated from different combinations of pad rows, with and without a primary vertex constraint. Track candidates are then propagated through the TPC using the Kalman filter, and the fit is continued into the ITS. After all the track candidates from the TPC have been assigned to their clusters in the ITS, a special ITS stand-alone tracking procedure is applied to the remaining ITS clusters, to recover the tracks that were not found in the TPC because of the momentum cut-off, the dead zones between the TPC sectors, or decays (however, ITS ‘tracklets’ produced in this way have not been considered in the present analysis).
2nd) the track is propagated outward and reconstruction is invoked for all central detectors: at the end of the first pass, an estimate of the track parameters and their covariance matrix 11 is obtained in the vicinity of the main vertex. The Kalman filter is then applied in the outward direction starting from the ITS, and space points with large χ² contributions are removed from the track fit. Once the outer radius of the TPC is reached, the precision of the track parameters is sufficient to extrapolate the tracks to the TRD, TOF, HMPID and PHOS detectors. Tracking in the TRD is done in a similar way to that in the TPC: tracks are followed to the outer wall of the TRD, and the assigned clusters further improve the momentum resolution. Next, the tracks are extrapolated to the TOF, HMPID and PHOS, where they acquire the information for particle identification.
3rd) the track is refitted inward and the ‘best’ track parameters are calculated at the vertex: finally, all the tracks are refitted inward, from their outermost reconstructed space point to the primary vertex (or to the innermost possible radius, e.g. for secondary tracks); each track is assigned the analog dE/dx signal coming from the clusters included in the fit, the TRD signal and the
11The covariance matrix of the Kalman fit is a 5 × 5 matrix representing the uncertainties of the fit and their correlation.
Time Of Flight. Tracks that failed the final refit toward the primary vertex are labeled as secondaries and used for the reconstruction of secondary vertices (see section 2.3.3); tracks that succeeded are labeled as constrainable, and both constrained and unconstrained fit parameters are stored.
Reconstructed tracks are stored in an array of combined track objects (class AliESDtrack), and saved into the ALICE Event Summary Data file (class AliESD). Further information is added to the ESD by the reconstruction algorithm of each detector, e.g. primary vertex position (see sec.2.3.1), particle identification (see sec.2.3.2), reconstructed secondary vertices (see sec.2.3.3).
Within the geometrical acceptance of the central barrel detectors, combined track finding has an efficiency well above 90%. The momentum resolution of the combined tracking (in Pb-Pb collisions) is estimated between 1 and 2.5% for transverse momenta up to 10 GeV/c (see sec.5.1); the angular resolution Δφ is ∼ 0.2 mrad, or even lower at higher momenta (see section 5.1.6 of the ALICE PPR [25]).
The ‘Distance of Closest Approach’ (DCA) to the primary vertex, defined as the extrapolated minimum distance between the fitted helix and the interaction point, has a resolution that depends both on the spatial resolution of the primary vertex (see sec.2.3.1) and on the track precision in the proximity of the interaction point (and therefore on the number of reconstructed space points in the ITS). In the case of Pb-Pb collisions, where the main vertex is very well defined, the DCA resolution for tracks having 5-6 clusters in the ITS is of the order of ∼ 100 µm [25] (see also sec.5.2).
2.3.1 Reconstruction of the primary vertex
The primary vertex constraint is used at various steps of the tracking procedure. The reconstruction of the primary vertex position is done using the information provided by the Silicon Pixel Detector (SPD).
Collisions occur in the ‘interaction diamond’, parametrized as a wide Gaussian along the z axis (σz = 5.3 cm), with approximately the width of the beam in the xy plane (σx,y ≃ 15 µm to 75 µm, depending on the beam luminosity and lifetime [75]).
The primary vertex algorithm uses the z-coordinate distribution of the reconstructed space points in the SPD layers to find the centroid zcen around which the distribution is symmetric. When the primary vertex is moved away from the center of the detector (z = 0), an increasing fraction of hits is lost and the centroid of the distribution no longer gives the primary vertex position, so the final position is calculated from the correlation between the two centroids z1 and z2 found in the two layers. This procedure has been developed and validated on AliRoot simulations, and gives a resolution σz ≃ 10 µm for Pb-Pb collisions 12. A similar approach
12Due to the much lower particle multiplicity, in pp collisions the primary vertex is reconstructed using a different algorithm (which works in 3D). The achieved resolution of both σz and σx,y varies between 50 and 150 µm, depending on the number of reconstructed tracks [25].
is applied to the reconstruction of the vertex position in the transverse plane, giving a resolution σx,y = 25 µm [25].
The x, y and z coordinates of the primary vertex (in the global ALICE coordinate system) are stored as an AliESDVertex object in the AliESD.
2.3.2 Particle identification
Charged particle identification in the central ALICE detector system is done by combining all the information from the ITS, TPC, TRD, TOF and HMPID. The particle identification in ALICE follows a ‘Bayesian’ approach [118], the most efficient way to combine information coming from different detecting systems that are efficient in complementary momentum subranges (see figure 2.5), and to combine signals of different nature (e.g. dE/dx, time-of-flight, transition radiation).
Figure 2.5. Detector efficiency for particle identification in different intervals of momentum, from about 100 MeV/c up to a few GeV/c. The efficiency of the TPC can be extended up to tens of GeV/c by measuring particle separation in the relativistic rise of dE/dx.
A good introduction to Bayesian statistics can be found in references [119] [120] [121]. The ‘Bayesian’ approach differs from the (standard) ‘frequentist’ approach in the definition of probability. In Bayesian statistics the probability is not defined as the frequency of occurrence of an event in a large set of repetitions of identical experiments (as frequentists do), but as the plausibility that a hypothesis is true given the available information. ‘Probability’ in the Bayesian view is not a property of the random observable, but a quantitative encoding of our state of knowledge about that observable. The main consequence is that, in data analysis, the Bayesian approach can assign probabilities to hypotheses.
Charged particle identification in ALICE implements five hypotheses: e, µ, π, K and p (meaning both particles and antiparticles). Each detector class produces the
conditional probability density function (or detector response function) r(s|i), i.e. the probability to observe a signal s when a particle of type i (i = e, µ, π, K, p) is detected. It is reasonable to assume that the functions r(s|i) reflect only properties of the detector and do not depend on other external conditions, like event and track selections.
The probability for a particle to be of type i when the signal s is observed, w(i|s), depends not only on the probability density function r(s|i), but also on the abundance of this type of particle in the considered sample, i.e. the ‘a priori’ probability Ci to find the particle i in the detector. The quantities Ci (the relative concentrations of particles of type i) do not depend on the detector properties, but reflect the external conditions, like particle ratios and track selections. The underlying assumption of this approach is that Ci and r(s|i) are not correlated. The detector response function r(s|i) can be parametrized using available experimental data; e.g. for each track reconstructed in the TPC, r(s|i) (where s is the assigned dE/dx measurement) is a Gaussian with centroid 〈dE/dx〉 given by the Bethe-Bloch formula and width calculated from simulated data.
The probability of each particle hypothesis is given by Bayes' formula:
w(i|s) = \frac{r(s|i) \, C_i}{\sum_{j=e,\mu,\pi,...} r(s|j) \, C_j} . (2.4)
This method can be extended to combine P.Id measurements from several detectors, considering the whole system of contributing detectors as a single block. The combined P.Id weights W(i|s̄) are calculated in a similar way to eq.2.4:
W(i|\bar{s}) = \frac{R(\bar{s}|i) \, C_i}{\sum_{k=e,\mu,\pi,...} R(\bar{s}|k) \, C_k} , (2.5)
where s̄ = (s_{ITS}, s_{TPC}, s_{TRD}, s_{TOF}, ...) is the vector of signals registered in the various detectors, Ci are the ‘a priori’ probabilities to be a particle of type i (the same as in eq.2.4), and R(s̄|i) is the ‘combined response function’ of the whole system of detectors.
The ‘a priori’ probabilities Ci must reflect the relative concentrations of particles of type i in the sample of interest. In a simple approach Ci can be assumed to be equal for all i (i.e. the same amount of e±, µ±, π±, etc.); however, in many cases it is possible to do better. For instance, one can start with equal ‘a priori’ probabilities for all particles, and update those numbers event by event with the detected particle ratios. This method has been successfully tested on AliRoot simulations, showing that the ‘a priori’ probabilities converge quickly [122].
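In code, Bayes' formula of eq.2.4 (and eq.2.5, if the single-detector responses are taken as independent so that R(s̄|i) is their product) reduces to a normalized product of response and prior; the species and numbers below are purely illustrative:

```python
def pid_weights(responses, priors):
    """Bayes' formula (eq. 2.4): responses[i] = r(s|i), priors[i] = C_i.
    For several detectors, responses[i] can be the product of the
    single-detector response functions (eq. 2.5)."""
    norm = sum(responses[i] * priors[i] for i in responses)
    return {i: responses[i] * priors[i] / norm for i in responses}
```

Starting from equal priors and updating them with the measured particle ratios, as described above, amounts to feeding the output concentrations back into `priors` event by event.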
2.3.3 Secondary vertices
Thanks to the good spatial resolution of the ITS, the ALICE central barrel detector is capable of reconstructing secondary decay vertices (V0), cascade decays and kink topologies (i.e. a track deviating from its trajectory due to a decay into a neutral plus a charged particle).
The V0 finding algorithm is executed after the tracking procedure, and runs over the final AliESDtrack objects stored in the ESD. The algorithm starts with the selection of secondary tracks, i.e. tracks with a too-large impact parameter with respect to the primary vertex. Each secondary track is combined with all the other secondary tracks of opposite charge, with different cuts applied to the positive and negative track impact parameters. With the helix track parametrization, the minimum Distance of Closest Approach (DCA) between the two tracks is calculated, both in 3 dimensions and in the transverse plane; pairs of tracks are rejected if their DCA is larger than a given value.
The reconstructed V0 candidates are then stored in the AliESD as AliESDV0 objects. They can be included in the AliFlowEvent and submitted to the correlation analysis (see sec.3.3).
Chapter 3
Flow Analysis in ALICE
This chapter gives an overview of the flow analysis with the Event Plane Method [123], introducing the terminology and describing the strategy from the experimental point of view (sec.3.1 and 3.2). The chapter includes a description of the analysis code as implemented for the ALICE environment (sec.3.3).
Other flow analysis techniques have also been developed (i.e. the Cumulants and the Lee-Yang zeros methods), and some of them are currently being implemented in ALICE. A brief overview is given in sec.3.4.1, pointing out their main advantages and disadvantages with respect to the event plane method.
3.1 Aim of the Flow Analysis
As introduced in sec.1.1, in a non-central heavy ion collision the impact parameter b together with the z axis (the beamline) defines the Reaction Plane (see fig.1.3). The azimuthal angle between the reaction plane and the x−z plane (measured in the lab frame 1) is called Ψtrue or ΨR (see fig.1.5).
Due to the geometry of the collision, the overlap region between the two nuclei has an initial spatial anisotropy. This causes an angular dependence of the pressure gradient (which is largest along the shortest direction of the overlap, i.e. the direction of b), and therefore the evolution of the system follows an anisotropic expansion: more particles are radiated along the direction of the reaction plane. The asymmetry observed in the final momentum distribution of the radiated particles is called anisotropic flow (see sec.1.1).
A Fourier expansion of the Lorentz-invariant distribution of the outgoing momenta is the usual way to characterize anisotropic flow [123]:
E \frac{d^3N}{dp^3} = \frac{1}{2\pi} \frac{d^2N}{p_T \, dp_T \, dy} \left( 1 + \sum_{n=1}^{+\infty} 2 v_n(p_T, y) \cos[n(\phi - \Psi_R)] \right) , (3.1)
1In the laboratory frame z is the beamline direction, y is the vertical direction, and x is the third cartesian axis.
where φ is the azimuthal angle of outgoing particles and ΨR is the reaction plane angle, both measured in the laboratory frame (see also eq.1.1).
The Fourier coefficients vn are then given by:
v_n = \langle \cos[n(\phi - \Psi_R)] \rangle , (3.2)
where the average is taken over all particles of all events. For odd harmonics, vn changes sign between forward and backward rapidity, because particle distributions are equal within the two hemispheres ±y (or ±η in symmetric collisions) but opposite in sign due to global momentum conservation.
Figure 3.1. Left: transverse picture of elliptic flow, projected on the transverse plane (x−y), and side picture of directed flow, projected on the beam-vertical plane (z−y). Right: physical meaning of v2 as a modulation of the dN/dφ distribution with respect to the reaction plane Ψ.
We call the first Fourier coefficient v1 directed flow and the second coefficient v2 elliptic flow (see sec.1.1). Figure 3.1 (left) gives an intuitive picture of these two observables, showing the effect of v2 on the transverse plane and the effect of v1 on the beam-vertical plane. Figure 3.1 (right) shows the physical meaning of v2 as a modulation of the azimuthal distribution dN/dφ with respect to the reaction plane.
Higher harmonics can also be studied, but their magnitude is much smaller. Recent studies have shown that the ratio v4/v2² is an important observable which provides information about the ideal-fluid behavior of the system [124]. However, this thesis is devoted to the study of elliptic flow.
The method applied in the analysis is the Event Plane method, introduced by Danielewicz and Odyniec in 1985 [125] and generalized by Poskanzer and Voloshin [123]. It has been successfully used in many heavy ion experiments, from the AGS to the SPS and RHIC, and in particular by the STAR collaboration, which wrote a specific software package (from which the present analysis code has been developed). The event plane method and its implementation in the ALICE environment are extensively described in the following sections.
The event plane analysis implemented for ALICE can be applied to identified/unidentified charged particles and to neutral strange particles (K0, Λ0) reconstructed as neutral secondary vertices from their decay products. However, due to time limits and to continuous changes in the reconstruction framework, the analysis has been limited to unidentified charged particles (see chap.5).
3.2 Event Plane Analysis method
The event plane method is a straightforward consequence of eq.3.2, with the only remark that the true (non-observable) reaction plane of the collision is replaced by the experimentally reconstructed ‘event plane’.
Therefore, the first step of the analysis is the reconstruction (on an event basis) of the event plane Ψ from the anisotropy of the event itself.
The ‘observed’ event plane Ψobs, also called Ψn to emphasize the harmonic used in the calculation, approximates the true reaction plane ΨR and can be used as a replacement, with the consequence of underestimating the true particle-plane correlation; this underestimate, however, can be kept under control (see 3.2.1).
The procedure to extract Ψobs from the emitted particles starts with the reconstruction of the flow vector, also called \vec{Q} vector after the original notation [123], defined for each event as:
\vec{Q}_n = \begin{pmatrix} \sum_i w_i \cos(n\phi_i) \\ \sum_i w_i \sin(n\phi_i) \end{pmatrix} = Q_n \begin{pmatrix} \cos(n\Psi_n^{obs}) \\ \sin(n\Psi_n^{obs}) \end{pmatrix} , (3.3)
where the sum runs over all detected particles. However, since not all particles have the same flow 2, the weight coefficients wi enhance the contribution of particles with larger flow, in order to make the \vec{Q} vector a better defined observable. The choice of optimal weights is discussed in section 3.2.3; in any case it is always possible to use wi = 1 for all particles.
For the 1st harmonic event plane (which is used to study the odd harmonic coefficients), the weights wi must have opposite signs at forward and backward rapidity, for reflection symmetry 3.
The observed event plane angle of the nth harmonic is given by the orientation of \vec{Q}_n:
\Psi_n = \frac{1}{n} \arctan \left( \frac{Q_n^y}{Q_n^x} \right) , (3.4)
by construction Ψn ∈ [−π/n, π/n).
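Eqs.3.3 and 3.4 translate directly into code; using atan2 on the two components of \vec{Q}_n resolves the quadrant so that Ψn falls in [−π/n, π/n) automatically. A minimal sketch with unit weights by default (the function name is ours):

```python
import math

def event_plane_angle(phis, n=2, weights=None):
    """Flow vector Q_n (eq. 3.3) and event-plane angle Psi_n (eq. 3.4)."""
    if weights is None:
        weights = [1.0] * len(phis)
    qx = sum(w * math.cos(n * phi) for w, phi in zip(weights, phis))
    qy = sum(w * math.sin(n * phi) for w, phi in zip(weights, phis))
    return math.atan2(qy, qx) / n   # in [-pi/n, pi/n)
```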
The flow coefficients vn are obtained from the correlation between \vec{Q}_n and the momentum of the emitted particles in the transverse plane. At the nth harmonic,
2E.g. the observed pT dependence of v2, see sec.1.3.4.
3In symmetric collisions, the particle distribution is equal but opposite in momentum around the center of mass, and the average of cos(φ) and sin(φ) with φ ∈ [0, 2π) is 0.
this correlation is calculated by averaging the cosine of the difference between the azimuthal angle φi of the outgoing particle and the event plane angle Ψn:
v_n^{obs} = \langle \cos[n(\phi - \Psi_n)] \rangle . (3.5)
The average is taken over all the selected particles in all events, in the centrality class under study. What is measured in this way is the ‘observed’ flow v_n^{obs}, whose magnitude is lower than the ‘true’ flow because in general Ψn ≠ ΨR.
It is also possible to extract the event plane angle from any harmonic m and use it in the calculation of the flow coefficient vn, with n ≥ m and n = km for an integer k:
v_n^{obs} = \langle \cos[km(\phi - \Psi_m)] \rangle . (3.6)
In this way the sign of vn is determined relative to Ψm, but the resolution deteriorates as k increases [123]. Due to the low sensitivity to v1 of the ALICE central barrel detector 4, this strategy has not been applied in the present analysis.
The difference between the true ΨR and the reconstructed Ψn gives the resolution of the event plane, i.e. the accuracy with which Ψn reproduces the true orientation of the reaction plane ΨR.
From the observed v_n^{obs}, the corrected values of the flow coefficients are obtained as:
v_n = \frac{v_n^{obs}}{\langle \cos[km(\Psi_m - \Psi_R)] \rangle} = \frac{\langle \cos[km(\phi - \Psi_m)] \rangle}{\langle \cos[km(\Psi_m - \Psi_R)] \rangle} . (3.7)
Following the prescription of the event plane method [123], it is possible to experimentally estimate the average 〈cos[km(Ψm − ΨR)]〉 using subevents (see also sec.3.2.1).
3.2.1 Resolution
The (full-event) resolution of the event plane Ψn is defined as the average cosine of the difference Ψn − ΨR. For a known value of vn it can be calculated as [123]:
res_{full} = \langle \cos[km(\Psi_m - \Psi_R)] \rangle = \frac{\sqrt{\pi}}{2\sqrt{2}} \, \chi_m \, e^{-\chi_m^2/4} \left[ I_{\frac{k-1}{2}}(\chi_m^2/4) + I_{\frac{k+1}{2}}(\chi_m^2/4) \right] , (3.8)
where \chi_m = v_m / \sigma and \sigma = \sqrt{\frac{1}{2M} \frac{\langle w^2 \rangle}{\langle w \rangle^2}} (choosing wi = 1 gives \chi_m = v_m \sqrt{2M}, where vm is the true flow). M is the particle multiplicity used in the calculation of \vec{Q}, and Ix are modified Bessel functions of order x.
Since the resolution deteriorates as k increases (eq.3.8), elliptic flow is measured best by using the second harmonic event plane Ψ2. Moreover, eq.3.8 is monotonically increasing with χm ∝ v2 √M. This gives a good resolution for high
4The directed flow increases with rapidity [126] [34], therefore v1 is small in the acceptance range of the present analysis (|η| < 0.9), giving a poor resolution on Ψ1.
multiplicity and strong flow (mid-central events), and a poor resolution at low multiplicity (peripheral events) and weak flow (central events). See section 4.2.
The subevent method to calculate the resolution [123] splits the event into two separate equal-multiplicity subevents. They can be chosen randomly or selected by positive/negative pseudorapidity 5.
For each subevent the subevent plane angle Ψ_n^A is calculated in the same way as in eqs.3.3 and 3.4:
\Psi_n^A = \frac{1}{n} \arctan \left( \frac{\sum_{i \in A} w_i \sin(n\phi_i)}{\sum_{i \in A} w_i \cos(n\phi_i)} \right) , (3.9)
where the sum is restricted to the particles in the subevent. The difference ΔΨsub = Ψ_n^A − Ψ_n^B already gives the accuracy of the measured subevent plane (or subevent resolution, res_sub):
res_{sub} = \langle \cos[n(\Psi_n^A - \Psi_R)] \rangle = \sqrt{\langle \cos[n(\Psi_n^A - \Psi_n^B)] \rangle} . (3.10)
At very low resolution (〈cos[n(Ψn − ΨR)]〉 ≪ 1), equation 3.8 is approximately linear in χm, which is proportional to the square root of the multiplicity M used in the calculation. Taking into account that the full event has twice the multiplicity of the subevent, a first estimate of the full-event resolution is given by:
\langle \cos[n(\Psi_n - \Psi_R)] \rangle \approx \sqrt{2} \, \sqrt{\langle \cos[n(\Psi_n^A - \Psi_n^B)] \rangle} . (3.11)
For higher values of the resolution (〈cos[n(Ψn − ΨR)]〉 ≈ 1) this approximation does not hold, and to correctly extrapolate the full-event resolution (with the χ and σ of eq.3.8) an iterative process is needed (and has been implemented in the analysis code): the first estimate of the full-event resolution from eq.3.11 (if √2 × res_sub < 1, otherwise the subevent resolution is used) is applied to v_n^{obs} to obtain v′n, which is then used to calculate χn. From equation 3.8 a new value of the resolution is calculated and applied again to v_n^{obs} to obtain v′′n, and so on. The iteration continues until the variation at each step becomes smaller than a lower limit; at that point the procedure stops and the last calculated resolution is taken. It turns out that such a procedure converges quickly, and just a few steps are needed to obtain a stable estimate of the full-event resolution.
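The extrapolation from the subevent to the full-event resolution can be sketched as follows, evaluating eq.3.8 (for k = 1) with a power-series expansion of the modified Bessel functions; here eq.3.8 is inverted by bisection rather than by the fixed-point iteration used in the analysis code, and all names are ours:

```python
import math

def bessel_i(nu, z, terms=60):
    """Modified Bessel function I_nu(z) from its power series (stdlib only)."""
    return sum((z / 2.0) ** (2 * k + nu)
               / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

def resolution(chi, k=1):
    """Eq. 3.8 with m = n (k = 1): event-plane resolution vs chi = v_n*sqrt(2M)."""
    a = chi * chi / 4.0
    return (math.sqrt(math.pi) / (2.0 * math.sqrt(2.0)) * chi * math.exp(-a)
            * (bessel_i((k - 1) / 2.0, a) + bessel_i((k + 1) / 2.0, a)))

def full_event_resolution(res_sub):
    """Extrapolate the subevent resolution to the full event: invert
    eq. 3.8 for chi_sub (bisection works because the resolution is
    monotonic in chi), then scale chi by sqrt(2) since the full event
    has twice the multiplicity of the subevent."""
    lo, hi = 0.0, 6.0               # assumes res_sub < resolution(6), i.e. < ~1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if resolution(mid) < res_sub:
            lo = mid
        else:
            hi = mid
    return resolution(math.sqrt(2.0) * 0.5 * (lo + hi))
```

In the low-resolution limit this reproduces the √2 scaling of eq.3.11, while at high resolution it correctly saturates below 1.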
3.2.2 Autocorrelation
The flow coefficients vn are meant to measure the average correlation between each particle and the rest of the event. However, the presence of the particle i in the
5Other ways to split the event into two separate equal-multiplicity subevents can be used, e.g. separating positively and negatively charged particles. Any method can work, as long as no bias is introduced in the azimuthal distribution. In the present analysis only η and random subevents have been used: in the first case particles are simply divided into positive and negative pseudorapidity, in the second case particles are randomly separated into two arrays of equal multiplicity.
calculation of the event plane slightly pulls the direction of \vec{Q}_n toward the direction of \vec{p}_i, introducing a small but non-negligible ‘spurious’ correlation between φi and Ψn, and therefore a bias on the flow measurement.
There are two ways to avoid autocorrelation, both implemented in the flow analysis code:
Subevent correlation: the event is split into two subevents and each particle i is correlated with the event plane angle calculated from the opposite subevent. The average vn is calculated as:
v_n = \frac{1}{2} \left( v_n^A + v_n^B \right) , (3.12)
where v_n^A and v_n^B are calculated as:
v_n^A = \frac{1}{N/2} \sum_{i \in A} \frac{\cos[n(\phi_i - \Psi_n^B)]}{\langle \cos[n(\Psi_n^B - \Psi_R)] \rangle} , (3.13)
where the term in the denominator is the resolution of the subevent.
Full-event correlation: for each particle i the event plane angle Ψn,i is recalculated by subtracting \vec{p}_i from \vec{Q}_n. Eq.3.7 is rewritten as:
v_n = \frac{1}{N} \sum_{i=1}^{N} v_{n,i} = \frac{1}{N} \frac{\sum_{i=1}^{N} \cos[n(\phi_i - \Psi_{n,i})]}{\langle \cos[n(\Psi_n - \Psi_R)] \rangle} , (3.14)
where φi is the azimuthal angle of particle i, and Ψn,i is the event plane angle calculated from a selection of particles that does not contain particle i. The denominator expresses the resolution of the full event.
As shown, in the first case v_n^{obs} is corrected by the resolution of the subevents (equation 3.10), while in the second case v_n^{obs} is corrected by the full-event resolution, calculated from equation 3.8. In other words, for the subevent correlation v2 = v_2^{sub}/res_{sub} (eq.3.10), and for the full-event correlation v2 = v_2^{full}/res_{full} (eq.3.8). Since the resolution is proportional to √M, res_{full} > res_{sub} and likewise v_2^{full} > v_2^{sub}. The ratio between vn and the resolution should compensate for the difference, so that the flow coefficients calculated in both ways are expected to be equal within the statistical error 6.
Because the \vec{Q}_n vector is better defined when more particles are used in its calculation (see eq.3.3), the full-event correlation seems to be the best choice; however, the subevent correlation works better in reducing nonflow effects (see section 4.1). Applying the two methods in parallel provides a useful cross-check.
6This may not be true when nonflow effects are present, see section 4.1.
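A minimal sketch of the full-event correlation of eq.3.14 (unit weights, second harmonic, function name ours): each particle's own term is subtracted from the components of \vec{Q}_n before its event-plane angle is computed:

```python
import math

def v2_obs_no_autocorrelation(phis, n=2):
    """Numerator of eq. 3.14: each particle is correlated with an
    event plane recomputed from Q_n minus its own contribution."""
    qx = sum(math.cos(n * p) for p in phis)
    qy = sum(math.sin(n * p) for p in phis)
    total = 0.0
    for p in phis:
        # remove particle i from the flow vector before taking Psi_{n,i}
        psi_i = math.atan2(qy - math.sin(n * p), qx - math.cos(n * p)) / n
        total += math.cos(n * (p - psi_i))
    return total / len(phis)
```

On events with no flow at all, correlating each particle with the full \vec{Q}_n (itself included) yields a spurious positive v_2^{obs} of order 1/√M, while the subtracted estimator averages to zero.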
The effect of autocorrelations is larger at low multiplicity and becomes smaller when the multiplicity is high (and the bias of a single particle on the direction of \vec{Q}_n becomes less important). However, if the ‘true’ flow is also small (e.g. in central events), the autocorrelations can dominate the measurement.
For simplicity of notation, here and in the following the event plane angle is written simply as Ψn, taking the above discussion for granted.
3.2.3 Weights
Weight coefficients wi are used in the calculation of the \vec{Q}_n vector to make it a better defined observable and increase the resolution of the event plane. Weights should be chosen in such a way as to enhance the contribution of particles with higher flow, since those particles define the direction of \vec{Q}_n better. Ideal weights should be proportional to vn itself [127].
Experimentally it is observed that elliptic flow increases with transverse momentum (high pT fragments are more likely to be radiated along the reaction plane) [128]; therefore a good choice of weights for the calculation of \vec{Q}_2 is the transverse momentum itself, or some monotonic function wi(pT) ∝ pT. In the analysis presented in the following chapters, the choice of the weights was determined by the shape of the input v2(pT) used in the simulation 7, and therefore:
wi(pT ) =
{ pT/p
sat T
1
pT < p sat T
pT ≥ psatT (3.15)
with psatT = 2 GeV/c. This choice gives a small gain in resolution with respect to unitary weights (see ch.4 and 5).
As already mentioned, for odd harmonics of the event plane the coefficients wi must change sign between forward and backward rapidity. The weights for the calculation of \vec{Q}_1 can be chosen proportional to y or η (which change sign in the two opposite hemispheres).
The weight coefficients wi must also compensate for the azimuthal anisotropy of the detector acceptance, which may add spurious contributions at higher harmonics to the measured flow. However, this kind of correction is very detector-specific, and can be calculated directly from the observed dN/dφ distribution of the reconstructed data before running the flow analysis (see the following section for the details).
3.2.4 Flattening Weights and Reconstruction Efficiency
Due to the geometrical arrangement and the segmentation of the detecting volumes (in particular the TPC), the reconstruction efficiency in the ALICE central barrel is φ-dependent.
7In the real experiment the choice of weights is done at a later stage: once the shape of v2(pT) has been reconstructed by running the analysis with unitary weights, the results can be refined by applying weights proportional to the observed v2(pT).
Figure 3.2. (a) dN/dφ distribution of (from top): all generated particles (MC input), reconstructed tracks and reconstructed primary particles in the ESD passing the minimal event plane cuts (see sec.5.3.2), and all reconstructed secondaries in the ESD. The distribution of reconstructed tracks shows the 18 sectors of the TPC. (b) Efficiency correction (φ weights) calculated with eq.3.16, for all reconstructed tracks passing the minimal cuts and for reconstructed primaries. Plots generated from all the simulated Hijing + GeVSim events (see chapter 5 for the simulation details).
The overall dN/dφ distribution of fig.3.2(a) clearly shows the azimuthal segmentation of the ALICE TPC. The dips in the distribution of reconstructed primaries correspond to the azimuthal coordinate of the cracks between the 18 sensitive pads on the outer walls of the TPC (see sec.2.1.2). The distribution of secondaries shows a double peak at each dip, due to the amount of particles produced in the 18 iron bars of field degrader located between each sensitive pad at the innermost radius of the TPC.
This azimuthal anisotropy in the reconstruction efficiency may introduce a spurious 18th harmonic component to the observed particle distribution, biasing the direction of the reconstructed reaction plane. To correct for this effect we assume that the cumulative φ distribution from a large sample of events is flat in an ideal detector; this is generally true due to the random orientation of the impact parameter of the collision with respect to the laboratory frame.
This φ dependence of the reconstruction efficiency can be corrected by introducing φ weights inversely proportional to the azimuthal efficiency of each φ bin in the reconstructed dN/dφ distribution. Each track i gets the weight w(φi) calculated as:

w(\phi_i) = \frac{1}{N_{\phi_i}} \times \frac{\sum_{j=1}^{N_{bins}} N_{\phi_j}}{N_{bins}},    (3.16)

where φi is the azimuthal angle at which the track i is emitted, and N_{φi} is the content of the histogram bin that contains φi. The obtained weights are then used, together with the pT (or η) weights (see sec.3.2.3), in the calculation of ~Qn.
The φ weights must be calculated specifically for the set of cuts in use, to take into account the reconstruction efficiency of the specific track selection. Moreover, this procedure can be directly applied on real data without any further use of simulations (this is done, for instance, in the event plane analysis at STAR).
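The flattening procedure of eq.3.16 amounts to dividing the mean bin content by each bin's own content, so that depleted bins (e.g. the TPC cracks) are boosted. A minimal sketch with illustrative names (the actual AliFlow implementation works on ROOT histograms):

```cpp
#include <cassert>
#include <cmath>
#include <numeric>
#include <vector>

// Flattening weights of eq. 3.16: each phi bin receives the ratio of the
// mean bin content to its own content, i.e. the inverse of the relative
// azimuthal efficiency. Sketch only; names are not taken from AliFlow.
std::vector<double> phiWeights(const std::vector<double>& binContent) {
    const double mean = std::accumulate(binContent.begin(), binContent.end(), 0.0)
                        / binContent.size();
    std::vector<double> w;
    w.reserve(binContent.size());
    for (double n : binContent) w.push_back(mean / n);
    return w;
}
```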
However, the azimuthal efficiency is not the same at all transverse momenta, being almost zero for very high pT tracks flying along a crack in the TPC. A more precise estimate of the weights should be done in pT bins, creating a two-dimensional weight array w(pT , φ), but this approach would require much larger statistics than available; therefore the φ weights in use for the present analysis have been calculated irrespective of pT .
3.2.5 Differential & Integrated Flow

In the present analysis, the elliptic flow has been studied both as a global property of the whole event ('integrated' flow) and as a function of the transverse momentum pT of the particles ('differential' flow). The dependence of v2 on other kinematic variables, such as y or η, is approximated as flat (see sec.1.3.4) and has not been studied further.
Assuming the pT bins used in the analysis are small enough (of the order of the detector resolution, see sec.5.1), we can consider the efficiency as approximately constant in each pT bin. The differential flow is therefore calculated by restricting the average of equation 3.7 to separate kinematic windows:

v_2(p_T) = \frac{\langle v_2 \rangle_{p_T}}{res_2} = \frac{\langle \cos[2(\phi - \Psi_2)] \rangle_{p_T}}{\langle \cos[2(\Psi_2 - \Psi_R)] \rangle}.    (3.17)
The differential flow coefficients, calculated in each pT bin, describe the pT dependence of v2.
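The numerator of eq.3.17 is a plain average of cos[2(φ − Ψ2)] over the particles of one pT bin; dividing by the event-plane resolution then gives the corrected v2(pT). A minimal sketch (illustrative names, not the AliFlow code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Observed differential flow, eq. 3.17 numerator: average of
// cos[2(phi - Psi2)] over the particles of one pT bin. Dividing the
// result by <cos[2(Psi2 - PsiR)]> yields the resolution-corrected v2.
double v2Observed(const std::vector<double>& phi, double psi2) {
    double sum = 0.0;
    for (double p : phi) sum += std::cos(2.0 * (p - psi2));
    return sum / phi.size();
}
```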
Existing results show that v2(pT ) is a monotonically increasing function of pT (see sec.1.3.4). In a real experiment the reconstruction efficiency is generally not flat with respect to the transverse momentum; therefore, to correctly calculate the total (integrated) 〈v2〉 of the event, the particle average must be weighted by the reconstruction efficiency as a function of pT :

\langle v_2 \rangle = \frac{1}{N_{tot}} \int_{p_T=0}^{\infty} v_2(p_T)\, \frac{dN'}{dp_T}\, dp_T = \frac{1}{{}^{eff}N^{obs}_{tot}} \sum_{p_T\ bins} v_2(p_T) \times \frac{dN^{obs}}{dp_T} \times eff(p_T).    (3.18)

The contribution to 〈v2〉 from the low-pT part of the spectrum (pT < 100 MeV/c, where the reconstruction efficiency is ∼ 0) is evaluated by extrapolating both v2(pT ) and dN/dpT to pT = 0. See sec.5.3.4 for a practical example.
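The discrete sum of eq.3.18 can be sketched as an efficiency-corrected weighted average over pT bins. In this sketch the per-bin factor `corr` plays the role of eff(pT), assumed here to be the multiplicative correction applied to each observed yield (the exact convention depends on how the efficiency histogram is defined); all names are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Integrated flow per eq. 3.18, sketched as a weighted average:
// each pT bin contributes its v2 weighted by the corrected yield,
// and the normalization is the corrected total multiplicity.
double integratedV2(const std::vector<double>& v2,     // v2 in each pT bin
                    const std::vector<double>& nObs,   // observed yield per bin
                    const std::vector<double>& corr) { // efficiency correction per bin
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < v2.size(); ++i) {
        num += v2[i] * nObs[i] * corr[i];
        den += nObs[i] * corr[i];  // efficiency-corrected total
    }
    return num / den;
}
```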
3.3 Implementation

The event plane analysis has been implemented for ALICE as a collection of ROOT C++ classes, under the name of the AliFlow package. Its structure is similar to that of the StFlow package [96], widely used for flow measurements by the STAR collaboration, from which the AliFlow classes have been developed.
The main object of the analysis is the AliFlowEvent, a high level object built from the ALICE Event Summary Data AliESD and optimized for the flow analysis. The most useful ESD information is extracted and organized into an efficient structure, which is then submitted to the analysis chain.
Unlike the StFlow package, which was built on a more complex framework (the Star Class Libraries SCL [96] [129]), the AliFlow package depends only on ROOT, which improves portability. AliFlowEvents can be created from the KineTrees contained in the kinematic files produced by the event generators, while their creation from the AliESDs only requires some libraries from AliRoot. AliFlowEvents can be stored for later processing, and the analysis can be entirely executed in ROOT.
However, the parallel processing of ESDs and KineTrees and the one-to-one comparison between reconstructed and simulated particles (from which the efficiency is calculated) need the AliAnalysisTask machinery to be in place [130], and therefore the whole AliRoot framework (see sec.2.2).
Figure 3.3. Flow diagram of the production chain. Events are generated, transported and reconstructed within the AliRoot framework (the reconstruction phase uses the same algorithm that will be used on real LHC events). The AliFlowMaker (embedded into an AliAnalysisTaskRL) translates the produced ESDs and KineTrees into AliFlowEvents, on which the flow analysis is later executed (see fig.3.4). Using the same set of cuts applied in the analysis (represented by the small diamond), the efficiency histograms are also filled at this step by a one-to-one comparison of simulated and reconstructed particles.
Figure 3.4. Flow diagram of the event plane analysis chain. After a first loop, where the φ weights are calculated from the dN/dφ distribution of all events, the analysis proceeds event by event and calculates the ~Q2 vector and the 'observed' event plane angle Ψ2 (for full and sub-events). Selected particles (and V0s) are then correlated to the 'observed' event plane to obtain v2obs as a function of pT , while the event resolution is calculated from sub-events as described in sec.3.2.1. At the end of the event loop, the observed v2(pT ) is corrected by the average event plane resolution, and the integrated elliptic flow is calculated taking into account the efficiency corrections, as a function of pT , from the efficiency histogram (see fig.3.3).
3.3.1 Analysis Strategy

The flow analysis package is organized in two main steps (see fig.3.3 and 3.4):

(i) Flow Maker: A parser reads the reconstructed Event Summary Data files and the KineTree files produced by AliRoot and creates AliFlowEvents.

• Only the most useful observables for the flow analysis are stored as data members of the AliFlowEvent class, i.e. a few global event observables (main vertex position, particle multiplicity), the kinematics of the reconstructed tracks (pT , η and φ), and the most sensitive variables for selecting good track candidates, together with the p.Id. signal from the central barrel detectors. V0 candidates can also be stored in a separate array.
• Very loose quality cuts are applied at this step (e.g. only tracks with TPC signal from the ESD, only primary hadrons from the KineTree).

• Data are organized into AliFlowEvent objects, which can be stored on disk for later analysis.

• If the AliESD loop is executed in an AliAnalysisTaskRL or an AliSelectorRL, efficiency corrections are also calculated 8.
Fig.3.3 gives a schematic view of the creation of AliFlowEvents, starting from an AliRoot simulation.
(ii) Flow Analysis: The analysis runs over AliFlowEvent objects; it can be executed on the fly during the parsing process, or later on stored files. Fig.3.4 gives a schematic view of the event plane analysis starting from the AliFlowEvent.
• A first loop on the event sample produces the flattening φ weight histogram (see section 3.2.4); this is usually done over the entire available sample.
• A second event loop performs the event-by-event calculation of the event plane and the correlation analysis; the observed elliptic flow v2obs(pT ) of selected particles is stored in a profile histogram (ROOT TProfile).
• At the end of the loop, the resolution of the full and the sub-events is calculated by averaging cos(∆Ψ2) over all events in the selected centrality class. The observed v2(pT ) is corrected by the event plane resolution; if efficiency corrections are available, the integrated flow 〈v2〉 is also calculated.
In a single loop, different track selections can be used for the calculation of the event plane, and the resulting v2 and event plane resolution are calculated in parallel. The selection of tracks used for the event plane determination must be defined prior to the first loop, to correctly calculate the flattening φ weights. A separate selection is applied to the particles entering the correlation analysis (the ones entering the average 〈cos 2(φ−Ψ2)〉), which can include stricter cuts for the isolation of primaries or for particle identification. The efficiency corrections must be calculated according to the set of cuts used for the correlation analysis.
The complete list of C++ classes that have been implemented for the event plane analysis is given in appendix A, together with a brief description of their purposes.
8The functionalities of the AliRunLoader (in particular, the access to the AliStack) are needed to make a one-to-one comparison between reconstructed tracks and simulated particles (see sec.2.3).
3.3.2 The AliFlow package

The AliFlow package is included in the standard release of AliRoot 9, and can be compiled together with the AliRoot framework.
The classes related to the analysis (everything except the AliFlowMakers) can also be exported and compiled as a standalone ROOT library 10, and loaded into ROOT to execute the analysis over existing AliFlowEvents.
A full set of macros (to make the AliFlowEvents, to produce φ weights, and to run the analysis) is also included in the package. Every class is provided with inline documentation that can be compiled with the standard ROOT THtml class to produce an HTML layout [131].
However, due to the many recent modifications in AliRoot (e.g. the AliESD has been replaced by the AliAOD, the AliAnalysisTaskRL has been taken out), some of the functionality described above may not work at present. The latest version of AliRoot on which the AliFlow package has been tested is v404Rev14.
3.4 Other Analysis Methods

Besides the event plane analysis described in sec.3.2, other methods to extract the flow coefficients vn from heavy-ion data have been developed in recent years. A brief description of these methods is given in this section, pointing out their main advantages and disadvantages (see sec.3.4.1).
The event plane analysis can be seen as a particular case of the two-particle correlation method [132]; in this view the analysis can be extended further to 2k-particle correlations calculated with the 'cumulant' method [133], and when k is pushed to infinity we end up with the 'Lee-Yang zero' method [134].
Pair-correlation method

Since all particles are correlated to the reaction plane, they are also indirectly correlated to each other; the anisotropic flow can therefore be measured by averaging the observed two-particle azimuthal correlations, without previous determination of the event plane angle [132].
In this approach, the integrated flow at the nth harmonic is calculated as:

\langle v_n \rangle^2 = \langle \cos[n(\phi_i - \phi_j)] \rangle,    (3.19)

where i and j run over all the particles in the event, and the average is taken over the whole centrality class of interest.
9The package is included in the 'Physics Working Group 2' (soft physics) folder, under AliRoot/PWG2/FLOW.
10The package is compiled with 'rootcint' [94] and 'gcc' [95] as a shared object library (named AliFlow.so). This kind of library can be loaded in ROOT (see sec.2.2.1).
From the integrated flow, the differential flow can be calculated as:

v_n(p_T) = \frac{\langle \cos[n(\phi_i - \phi_j)] \rangle}{\langle v_n \rangle},    (3.20)

where j is now limited to a specific pT bin, and i runs over all the particles in the event.
The pair-correlation method does not need to include corrections for the detector anisotropy [135]. On the other hand, the method does not reconstruct an event plane.
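Equation 3.19 can be evaluated by brute force over all pairs; a minimal O(M²) sketch for n = 2, with illustrative names:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Integrated flow from two-particle correlations, eq. 3.19 (n = 2):
// <v2>^2 = <cos[2(phi_i - phi_j)]> averaged over all particle pairs.
double v2PairCorrelation(const std::vector<double>& phi) {
    double sum = 0.0;
    std::size_t pairs = 0;
    for (std::size_t i = 0; i < phi.size(); ++i)
        for (std::size_t j = i + 1; j < phi.size(); ++j) {
            sum += std::cos(2.0 * (phi[i] - phi[j]));
            ++pairs;
        }
    // clamp at zero: finite statistics can push the pair average negative
    return std::sqrt(std::max(0.0, sum / pairs));
}
```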
Cumulant method

Eq.3.19 can be seen as the construction of a two-particle correlator, similar to the 2nd order cumulant. More generally, the cumulant approach considers multi-particle correlations [136].

The cumulant of 2k-particle azimuthal correlations cn {2k} (where n is the harmonic and 2k is the order of the cumulant) is a quantity built from all the measured azimuthal correlations up to order 2k:

c_n\{2k\} = \left\langle e^{in(\phi_1 + \ldots + \phi_{k'} - \phi_{k'+1} - \ldots - \phi_{k'+k''})} \right\rangle,    (3.21)

with k' + k'' ≤ 2k. For k = 1 the real part of eq.3.21 reduces to eq.3.19:

\Re\left(c_n\{2\}\right) = \Re\left(\left\langle e^{in(\phi_1 - \phi_2)} \right\rangle\right) = \langle \cos[n(\phi_1 - \phi_2)] \rangle.    (3.22)
The advantage of the cumulant of 2kth order is that it is insensitive to the contribution of lower order correlations, so that only the genuine 2k-particle correlation remains.
From cumulants it is possible to calculate the integrated flow Vn, defined here as the average projection onto the reaction plane of the event flow vector ~Qn (see also eq.3.3):

V_n = \left\langle \sum_j \cos[n(\phi_j - \Psi_R)] \right\rangle = M v_n.    (3.23)

Depending on the order of the cumulant, Vn is calculated as:

V_n\{2\}^2 = c_n\{2\}, \quad V_n\{4\}^4 = -c_n\{4\}, \quad V_n\{6\}^6 = \frac{c_n\{6\}}{4}, \ \ldots    (3.24)
Differential flow can also be determined with cumulants; for the details refer to [137].
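The inversions of eq.3.24 for the first two orders are one-liners; a sketch with illustrative helper names:

```cpp
#include <cassert>
#include <cmath>

// Integrated flow from cumulants per eq. 3.24:
//   Vn{2}^2 = cn{2}   ->  Vn{2} = cn{2}^{1/2}
//   Vn{4}^4 = -cn{4}  ->  Vn{4} = (-cn{4})^{1/4}
double vnFromC2(double c2) { return std::sqrt(c2); }
double vnFromC4(double c4) { return std::pow(-c4, 0.25); }
```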
From the practical point of view, the calculation of cumulants starts from the generating function [137]:

G_n(z) = \left\langle \prod_{j=1}^{M} \left[ 1 + w_j \left( z e^{-in\phi_j} + z^* e^{in\phi_j} \right) \right] \right\rangle,    (3.25)
where z is a complex number and z∗ is its complex conjugate. The product runs over all the particles in each event, and the average is taken over all events. The 2kth order cumulant is given by the coefficient of z2k in a series expansion of the logarithm of Gn. The function Gn is evaluated at a few points in the complex plane around the origin z = 0, and by taking the logarithm at each of these points it is possible to interpolate the derivatives of ln(Gn) and obtain the cumulants cn {2k}.
The elliptic flow calculated with the 2kth order cumulant is only affected by non-flow correlations among 2k particles, which scale as N1−2k (N is the total multiplicity of particles in an event [133]). Therefore, the 4th order cumulant is already enough to systematically remove all the non-flow effects due to two- and three-particle correlations (e.g. particle decays), but it does not remove any genuine correlation of four or more particles (e.g. jets).
Since flow is a collective effect, higher order cumulants can be preferable to remove non-flow correlations (the 2kth order cumulant removes non-flow effects due to (2k−1)-particle correlations), but their calculation becomes more and more tedious as k increases. Cumulants of different orders can be compared to each other to cross-check the results 11 [141]. The cumulant method, too, does not measure an event plane.
Lee-Yang zero method

Extending the cumulant method to an infinite order cumulant cn {∞} leads to the Lee-Yang zero method [134].
Similarly to the cumulant method, a generating function is defined [142]:

G^\theta(r) = \left\langle \prod_{j=1}^{M} \left[ 1 + i\, r\, w_j \cos\left(n(\phi_j - \theta)\right) \right] \right\rangle,    (3.26)
where r is a real positive number and θ is an angle between 0 and π/n. The product involves all the particles in each event, and the average is taken over the events.
The behavior of the zeros in the generating function Gθ reflects the presence of collective flow in the system [134]. In particular the first zero of the generating function is directly related to the magnitude of anisotropic flow.
In practice, the Lee-Yang zero method starts with the calculation of Gθ for many values of θ, and for each the first minimum of the absolute value |Gθ(r)| is calculated. The value of r at the minimum, rθ0, is a good approximation of the first zero.
The integrated flow Vn (as defined in eq.3.23) is then calculated as:

V_n^\theta\{\infty\} \equiv \frac{j_{01}}{r_0^\theta},    (3.27)
11In the context of STAR, a comparison between v2 {2} and v2 {4} is used to estimate the magnitude of non-flow effects [138] (see also [139] and [140]).
where j01 is a constant (j01 = 2.4, see [142] and [134]). The integrated flow values calculated in this way are then used to obtain the flow coefficients at different harmonics and the differential flow [142].
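A sketch of the evaluation of |Gθ(r)| from eq.3.26, with unit weights wj = 1 and illustrative names; scanning r for the first minimum rθ0 then gives Vn through eq.3.27:

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Modulus of the Lee-Yang generating function of eq. 3.26 at (r, theta),
// averaged over a sample of events (each event is a list of azimuthal
// angles; all weights w_j are taken as 1 in this sketch). Locating the
// first minimum r0 of |G| along r yields Vn = j01 / r0, j01 = 2.4.
double modG(double r, double theta,
            const std::vector<std::vector<double>>& events, int n = 2) {
    std::complex<double> avg(0.0, 0.0);
    for (const std::vector<double>& ev : events) {
        std::complex<double> prod(1.0, 0.0);
        for (double phi : ev)
            prod *= std::complex<double>(1.0, r * std::cos(n * (phi - theta)));
        avg += prod;
    }
    avg /= static_cast<double>(events.size());
    return std::abs(avg);
}
```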
The Lee-Yang zero method provides the smallest systematic error of all the methods, practically removing all non-flow effects (from any k-particle correlation). The main limitation of this method comes from statistical errors, which decrease only logarithmically with the number of events and depend dramatically on χn (χn ≃ vn√(2M), see eq.3.8); therefore it is not always applicable.
In some very recent developments, a way to estimate the event plane using the Lee-Yang zero method has been devised, by recasting it in a form similar to the standard event plane analysis [143]. Non-flow correlations are eliminated by using the information from the length of the flow vector, in addition to the event-plane angle.
3.4.1 Applicability

A detailed discussion of the sensitivity of the three methods, with an estimate of their systematic and statistical errors, is given in sections V and VII of reference [134]. The main conclusions are summarized in the following.
Systematic Error
The main systematic error of flow measurements is due to few-particle correlations (resonance decays, jets, transverse momentum conservation), generally classified as 'non-flow' (see sec.1.4). Non-flow effects become more important at low multiplicity and low genuine collective flow (see chap.4). Two-particle correlation methods, like the event plane method itself, cannot disentangle the genuine flow from any other source of azimuthal correlation; therefore the systematic error due to non-flow has been studied in detail in sec.4.1, by applying the event plane method to Hijing simulations without flow. On real data (e.g. at STAR) non-flow effects are measured from the difference between v2 {2}, obtained with the event plane method, and v2 {4}, obtained with the 4th order cumulant [138].
The 2kth order cumulant removes non-flow up to (2k − 1)-particle correlations; therefore the systematic error due to non-flow becomes smaller and smaller when using higher order cumulants. The minimum systematic error is reached with the Lee-Yang zero method.
Roughly, the relative systematic error of the event plane method is:

\frac{\delta v_n}{v_n} = O\!\left( \frac{1}{M v_n^2} \right).    (3.28)
While for the (2k)th order cumulant (with 2k > 4) the systematic error is:

\frac{\delta v_n}{v_n} = O\!\left( \frac{1}{M^{2k-1} v_n^{2k}} \right),    (3.29)

only if v_n < 1/M^{1-\frac{1}{2k-2}}; otherwise it becomes of the same magnitude as eq.3.30. The Lee-Yang zero method has the smallest systematic error, which is approximately:

\frac{\delta v_n}{v_n} = O\!\left( \frac{1}{M} \right).    (3.30)
Flow is unambiguously identified when it is larger than other spurious correlations; this defines the conditions under which each of the three methods can be applied. The condition of applicability of the two-particle correlation method is therefore vn ≫ 1/M^{1/2}; it is vn ≫ 1/M^{3/4} for the 4th order cumulant, and vn ≫ 1/M for the Lee-Yang zero method.
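These three thresholds can be tabulated for a given multiplicity M; a trivial sketch with illustrative names (the "much greater than" conditions themselves remain a matter of judgment):

```cpp
#include <cassert>
#include <cmath>

// Applicability thresholds for a flow signal vn, given the event
// multiplicity M:
//   M^{-1/2}  two-particle correlations / event plane,
//   M^{-3/4}  4th order cumulant,
//   M^{-1}    Lee-Yang zero.
struct FlowThresholds { double pairCorr, cumulant4, leeYang; };

FlowThresholds thresholds(double M) {
    return { std::pow(M, -0.5), std::pow(M, -0.75), 1.0 / M };
}
```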
Statistical Error

The statistical error on flow measured with the three methods depends in general on the charged multiplicity and on the magnitude of the genuine flow, summarized by the parameter χn ≃ vn × √M (which is related to the resolution of the event plane, see eq.3.8), and on the number of events entering the analysis. For values of χn ≫ 1 the relative statistical error associated with the three methods is of the order of:

\frac{\delta v_n}{v_n} = O\!\left( \frac{1}{\chi_n \sqrt{N}} \right),    (3.31)

where N is the number of events entering the analysis.

The statistical error of the cumulant method becomes much larger for χ < 1, and it increases with the cumulant order. The Lee-Yang zero method has the same problem for small values of χ, and it is not very reliable for χ < 0.5. The statistical error of the event plane method always has the order of magnitude of eq.3.31, and a larger sample of events may compensate for low values of χn.
At LHC energy the resolution parameter is expected to be well above 1 over a wide range of centrality (see sec.1.3), so in principle all the methods can be applied.
In the present thesis, only the event plane method has been fully implemented and tested for the ALICE environment. The event plane method has been chosen because, due to its easy implementation and low level of abstraction, it is the most appropriate for first-day physics at ALICE:
• the method is quite intuitive and the mathematics behind it is simpler than that of any other method, therefore it is easier to check the consistency of the results;
• the method provides a direct estimate of the event plane angle Ψn, which is a necessary input for other kinds of analysis (e.g. HBT, jets);
• when the analysis is done on simulations, the reconstructed Ψn can be easily compared with the input of the simulations to optimize the applied cuts (see sec.5.3.2);
• theoretical predictions show that elliptic flow will be the dominant contribution to the azimuthal anisotropy of the events at LHC energy, minimizing the systematic error due to non-flow;
• the resolution parameter χ is expected to be well above unity at LHC, but even in a worse scenario (χ ≪ 1) the event plane method provides the lowest statistical error on the measured vn (making it the most convenient method to quickly obtain some results);
• since the event plane method cannot disentangle genuine collective flow from non-flow correlations, it can be used on Hijing simulations to experimentally characterize non-flow effects (as is done in sec.4.1).
However, for longer term analysis at ALICE, the event plane method will probably be replaced by the more accurate 'new' Lee-Yang zero method [143], the implementation of which is currently under development.
Chapter 4
Feasibility of the Event Plane analysis
A large source of uncertainty on the measurement of elliptic flow is due to the unknown magnitude of non-flow effects at LHC energies, i.e. few-particle correlations not related to the reaction plane, such as jets and resonance decays.
In the present approach, non-flow effects have been simulated using Hijing (see sec.2.2.3). A first set of Hijing simulations has been produced with no genuine elliptic flow, in order to study the magnitude of the 'apparent' v2 that would be reconstructed with the event plane analysis, and to characterize its multiplicity (centrality) dependence (see sec.4.1).
The magnitude of non-flow reconstructed from the Hijing events has been compared with an extensive set of GeVSim simulations, produced with different combinations of the two most sensitive observables, i.e. the multiplicity and the magnitude of v2, leading to the feasibility 'grid' in fig.4.7 (see section 4.2).
Finally, a new set of Hijing simulations has been produced and boosted with the flow AfterBurner, to study the interplay between genuine flow and nonflow effects (see sec.4.3).
4.1 Non-Flow estimate with Hijing

Non-flow effects have been studied using Hijing simulations, which by construction have no genuine flow (v2 = 0). This approach offers the possibility to characterize the contribution of non-flow effects alone and to determine their magnitude, by applying the event plane analysis (described in sec.3.2) to the simulated events and comparing the azimuthal correlation between sub-events and between the reconstructed event plane and the simulated reaction plane.
A full detector reconstruction has been excluded because the aim of this preliminary study is to isolate and measure the non-flow correlations generated by Hijing. Therefore, the analysis was executed over the primary particles in the Hijing KineTrees 1.

Hijing, as briefly introduced in sec.2.2.3, includes all known physics effects arising from the superposition of pp collisions, such as jets, resonances and cascade decays. The implementation of Hijing also includes the possibility to switch on an internal parametrization of jet-quenching effects, which reproduces the energy lost in the medium by the leading parton of the jet.
Figure 4.1. (a) Charged multiplicity in one unit of rapidity as a function of the impact parameter of the collision. The arrow on the x axis indicates what we consider 'most central' in our rescaled definition of centrality. (b) dNch/dη distribution for 25 000 Hijing events with jet-quenching and resonance decays switched on, showing both the minimum bias distribution (simulated on the full range of impact parameters, 0 < b < 16 fm) and the 'rescaled' one (with 7 < b < 14.5 fm). For comparison, the dNch/dη distribution of 25 000 Hijing events with jet-quenching switched off is also shown (events simulated on the 'rescaled' impact parameter range: 7 < b < 14.5 fm).
A few remarks must be made about the current implementation of Hijing:
• The charged particle multiplicity produced is a factor of 2 higher than the current predictions for LHC (see sec.1.3.3). As shown in fig.4.1(a), the produced multiplicity can be reduced by rescaling the impact parameter. Therefore, we define as 'most central' a collision with impact parameter b = 7 fm (leading to a charged multiplicity dNch/dη = 2500 ± 300), and as 'most peripheral' a collision with impact parameter b = 14.5 fm (where the charged multiplicity can be as low as a few particles per event: dN/dη ∼ 0) 2.
1The KineTree is the list of all simulated particles (see chap.2.2.2). Detector effects are treated in a separate step (see sec.5.1).
2In this set of Hijing simulations, the multiplicity is about 50% higher than the prediction given in sec.1.3.3, leaving room for the large uncertainties of the extrapolations of dN/dη in PbPb collisions at LHC. The upper limit on b has the purpose of reducing the number of events with very low multiplicity, since the domain of ultraperipheral collisions is outside the scope of the flow analysis (see also sec.1.3).
• The rescaling of the impact parameter introduces a clear bias in the event kinematics: a larger impact parameter implies fewer binary interactions and therefore a lower number of produced jets. Unfortunately the consequences of this approach on non-flow effects are not easy to quantify, and the study of particle production processes in Hijing is beyond the purposes of the present thesis. However, a comparison with existing measurements made at STAR (where non-flow is calculated from the difference between v2 {EP} and v2 {4} [139] [140]) shows that the present approach reproduces well both the magnitude and the centrality dependence of non-flow effects.
• The effects of jet-quenching and resonance decays (which can be turned on and off in Hijing) have been studied separately to characterize the contribution of each of them to the observed non-flow. One side effect of jet-quenching is that the multiplicity becomes, on average, 60-70% higher, probably due to the fact that the energy lost in the medium by the leading partons is transformed into soft radiation (low pT particles). For the selected range of impact parameters, fig.4.1(b) shows the multiplicity distribution with and without jet-quenching.
As discussed in sec.3.4, the event plane analysis is equivalent to a two-particle correlation method, where the azimuthal correlation is quantified by:

\left\langle v_2^2 \right\rangle = \langle \cos[2(\phi_i - \phi_j)] \rangle,    (4.1)

where the average is taken over each pair of particles i, j in the event. By its definition, any kind of few-particle correlation contributes to the reconstructed v2.
According to the definition given in sec.1.4, the correlation between the reconstructed event plane and the true reaction plane (i.e. Ψ2true − Ψ2obs) due to 'flow' can be compared to the observed sub-event correlation ∆Ψ2sub = Ψ2A − Ψ2B to estimate the magnitude of non-flow effects. When the 'true flow' is 0, the observed sub-event correlation gives a direct measurement of the 'non-flow' contributions.
Fig.4.2 shows the average cos [2 (∆Ψ2)] for four sets of 25 000 Hijing events each (with 7 < b < 14.5 fm), testing all possible combinations of jet-quenching and strong resonance decays on/off. The leftmost bin shows the 'true' resolution of the 2nd harmonic event plane, defined as:

\left\langle \cos\left[ 2 \left( \Psi^{true} - \Psi_2^{obs} \right) \right] \right\rangle,    (4.2)

while the following 3 bins (bins 2, 3 and 4) show the 'observed' resolution, extrapolated from the observed sub-event correlation ∆Ψ2sub using an iteration of eq.3.8 (see sec.3.2.1). Approximately:

\langle \cos[2(\Psi_2 - \Psi_R)] \rangle \approx \sqrt{2 \left\langle \cos[2(\Psi_2^A - \Psi_2^B)] \right\rangle},    (4.3)
Figure 4.2. True and observed event plane resolution (defined as ⟨cos [2 (Ψtrue − Ψ2obs)]⟩, see sec.3.2.1), for 4 different sets of 25 000 Hijing events (with 7 < b < 14.5 fm), in all possible combinations of jet-quenching and strong resonance decays on/off, using different definitions of sub-events (see text for the details). The two plots show the resolution of the event plane calculated without and with pT weights, respectively.
where Ψ2A and Ψ2B are the event plane angles calculated from two equal-multiplicity sub-events. This is done for three different choices of sub-events (random sub-events, η sub-events, and η sub-events with a gap of 1 unit at mid-pseudorapidity). In fig.4.2(a) the 2nd harmonic ~Q vector is calculated with unitary weights, while in fig.4.2(b) pT weights are used (see sec.3.2.3).
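The low-resolution approximation of eq.4.3 can be sketched as follows (illustrative names; the full analysis iterates eq.3.8 instead of relying on this approximation):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Full event-plane resolution estimated from the sub-event correlation,
// eq. 4.3 (low-resolution approximation of eq. 3.8):
//   <cos[2(Psi2 - PsiR)]> ~ sqrt( 2 <cos[2(PsiA - PsiB)]> )
// psiA[i] and psiB[i] are the two sub-event plane angles of event i.
double resolutionFromSubEvents(const std::vector<double>& psiA,
                               const std::vector<double>& psiB) {
    double corr = 0.0;
    for (std::size_t i = 0; i < psiA.size(); ++i)
        corr += std::cos(2.0 * (psiA[i] - psiB[i]));
    corr /= psiA.size();
    // clamp: with no genuine correlation corr can fluctuate below zero
    return std::sqrt(std::max(0.0, 2.0 * corr));
}
```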
It can be immediately noticed that the presence of jet-quenching effects introduces a small correlation with the true reaction plane Ψtrue (the 'true' resolution is larger, 1st bin in fig.4.2(a) and (b)), and, as expected, the use of pT weights in the calculation of ~Q2 enhances the resolution of the event plane for both genuine flow and spurious non-flow effects (see also fig.4.3).
But the most interesting result is the large amount of non-flow correlations that is observed using the event plane method. The figure, however, indicates a way out: since jets and decays cause particles to propagate in the same direction in φ-η 3, splitting the event into two rapidity or pseudorapidity intervals suppresses most of the non-flow correlations, and an even better suppression is achieved by choosing sub-events well separated, e.g. by using a gap at mid-rapidity (at the cost of reduced statistics).
It is also interesting to note that the presence of a detectable event plane (due, in this case, to the correlation of the products of jet-quenching with the true Ψ) makes the observed event plane resolution less sensitive to the choice of the sub-events (see second and last bin of fig. 4.2(a)). This applies even better when the genuine elliptic flow is larger (see sec. 4.3). In other words, for a large genuine elliptic flow signal the non-flow correlations become less important.
3See, for instance, chap.6.8 of the reference [25].
4.1 NonFlow estimate with Hijing 67
Following the prescriptions of the event plane analysis [123], the resolution of the event plane is calculated from the observed values of ⟨cos[2(∆Ψ_2^sub)]⟩; this corrects the observed v_2^obs to take into account the uncertainty in the determination of the reaction plane (see sec. 3.2). In case non-flow is dominating, the ‘apparent’ event plane resolution (which is always small, since the reaction plane is not clearly defined) gives a large boost to the observed v_2^obs (which is non-negligible, due to few-particle correlation effects), leading to an overestimate of the measured elliptic flow.
Figure 4.3. Elliptic flow v2 as a function of pT calculated w.r.t. the true reaction plane (squares) and the observed event plane with and without resolution correction (triangles and circles, respectively); the resolution is calculated from ∆Ψ_2^rnd-sub. Two sets of 25000 Hijing events are shown (7 < b < 14.5 fm), one with jet-quenching and strong resonance decays switched off (empty markers), the other with everything switched on (full markers).
The transverse momentum dependence of non-flow effects is shown in fig. 4.3. The lower sets of points show the shape of v2 as a function of pT calculated with respect to the true reaction plane. The presence of jet-quenching introduces a small (⟨v2⟩ ≃ 0.1%) true flow effect increasing with pT, which is completely absent when medium effects are switched off.
The two sets in the middle represent the pT dependence of the ‘observed’ non-flow (v_2^obs calculated with respect to the observed 2nd harmonic event plane), whose magnitude is roughly the same for both inputs. Non-flow effects in Hijing are dominated by jet-like correlations, so the small correlation with the true reaction plane due to jet-quenching is washed away almost completely.
The upper sets of points show the reconstructed v2 after applying the resolution correction (calculated from random sub-events). The use of this ‘apparent’ event plane resolution (due only to non-flow) results in very large values of the reconstructed v2. The measured v2 also slowly increases with pT, reaching a saturation value at about 2 GeV/c (where v2 ∼ 10−15%). However, the increase is not linear, and v2 is very small for pT ≲ 1 GeV/c, where most of the particles are produced. This leads to an integrated non-flow ⟨v2⟩ ≃ 1.4%, with a difference of ±0.3% depending on the different settings of Hijing 4.
4 Note that this is a particle-wise average for all the ‘rescaled’ Hijing events, with multiplicities
Figure 4.4. Non-flow v2 as a function of pT for 3 centrality classes (for Hijing events with jet-quenching and resonance decays on): σ_tot = 0−20% (dN/dη < 100), σ_tot = 40−60% (260 < dN/dη < 670), σ_tot = 80−100% (dN/dη > 1450), and the 0−100% average.
Since non-flow effects in Hijing arise from few-particle correlations such as jets, one might argue that their magnitude is inversely proportional to the total multiplicity. In fact, eq. 4.1 states that the presence of many randomly oriented particles washes away the correlation. A clear example of this is found in fig. 4.4, where the pT dependence of v2 is plotted together for 3 well separated centrality classes (most peripheral 0−20%, mid-central 40−60%, and most central 80−100%). The lower the multiplicity, the larger the non-flow effect.
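The dilution argument can be illustrated with a toy model (my own construction, not Hijing): events built only from ‘jet-like’ pairs of particles at identical φ. The sub-event correlation stays roughly constant with multiplicity, while the implied non-flow ṽ2 = √(g̃2/M) falls off as 1/√M (here g̃2 = 2⟨cos[2(∆Ψ_2^sub)]⟩, following eq. 4.4 with v2 = 0).

```python
import numpy as np

def pair_event(n_pairs, rng):
    """Toy event made only of two-particle 'jet-like' clusters at the same phi."""
    phi = rng.uniform(0.0, 2.0 * np.pi, n_pairs)
    return np.concatenate([phi, phi])

def psi2(phis):
    """2nd harmonic event plane angle from the Q-vector."""
    return 0.5 * np.arctan2(np.sin(2 * phis).sum(), np.cos(2 * phis).sum())

def subevent_corr(events, rng):
    """<cos[2(Psi_A - Psi_B)]> from random equal-multiplicity sub-events."""
    vals = []
    for phis in events:
        idx = rng.permutation(len(phis))
        vals.append(np.cos(2 * (psi2(phis[idx[::2]]) - psi2(phis[idx[1::2]]))))
    return float(np.mean(vals))

rng = np.random.default_rng(1)
c_100 = subevent_corr([pair_event(50, rng) for _ in range(300)], rng)    # M = 100
c_1000 = subevent_corr([pair_event(500, rng) for _ in range(300)], rng)  # M = 1000
# implied non-flow signal, eq. 4.4 with v2 = 0: g2~ = 2 <cos[2(dPsi_sub)]>
v_100 = np.sqrt(2.0 * c_100 / 100.0)
v_1000 = np.sqrt(2.0 * c_1000 / 1000.0)
```

The sub-event correlation comes out similar for both multiplicities, while the implied ṽ2 drops by roughly √10 between M = 100 and M = 1000, mirroring figs. 4.4 and 4.5(a).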
The sub-event method provides a way to quantify the amount of non-flow correlations. Fig. 4.5(a) shows that the sub-event correlation due to non-flow is approximately independent of the multiplicity of the event, and its magnitude depends on the choice of the sub-events (see caption). Eq. 4.3 quantifies the azimuthal correlation within the event and, in case of genuine flow, is used to calculate the resolution of the event plane 5 [123].
In the present simulations v2 = 0, therefore the above average is a direct measurement of non-flow effects, which can be expressed by the parameter g̃2 in eq. 4.4 [136]:

⟨cos[2(Ψ_2^A − Ψ_2^B)]⟩ = M_sub (v_2^2 + g_2) = (M/2) v_2^2 + (1/2) g̃_2 .  (4.4)
The obtained results on g̃2 are compatible with the experimental estimates of non-flow made at STAR [139, 140].
Eq. 4.4 also makes clear that the non-flow contributions do not simply add to v2, but to (M/2) × v_2^2. This is what we observe in fig. 4.5(b), i.e. ⟨ṽ2⟩ = √(g̃2/M), with g̃2 obtained from ⟨cos[2(∆Ψ_2^sub)]⟩ using random sub-events (g̃2 ≃ 0.07). The two data sets represent the sub-event correlation method with η sub-events and the full-event
distributed as in fig. 4.1(b). The values of ⟨v2⟩ are obtained by integrating v2(pT) × dN/dpT (v_2^res of fig. 4.3 convolved with the generated dN/dpT spectrum).
5 The 2nd harmonic event plane resolution for the sub-events is immediately given by res_2^sub = √⟨cos[2(Ψ_2^A − Ψ_2^B)]⟩, while the full-event resolution is obtained using eq. 3.8 (see sec. 3.2.1).
correlation method with random sub-events (see sec. 3.2.2). As expected, the sub-event correlation method gives a lower observed v2, which is partially compensated by the lower observed resolution calculated from η sub-events.
Figure 4.5. (a) ⟨cos[2(Ψ_2^A − Ψ_2^B)]⟩ with respect to the charged particle multiplicity at mid-rapidity, for three different definitions of sub-events (random, ±η, and ±η with a gap at mid-rapidity). (b) Observed ṽ2 from pure non-flow correlations vs dNch/dη, calculated with the sub- and full-event correlation methods (using η and random sub-events respectively). The dashed line shows the upper limit on non-flow (g̃2 ≃ 0.07, calculated from random sub-events).
Since the magnitude of ⟨cos[2(∆Ψ_2^sub)]⟩ changes significantly for different definitions of sub-events (see fig. 4.5(a)), with the proper choice of the analysis settings it is possible to minimize these effects. In particular, splitting the event into positive and negative rapidity suppresses most of the non-flow correlation, and even more suppression is achieved by cutting away a slice at mid-rapidity (limiting the analysis to 0.5 < |η| < 1); however, this solution reduces the statistics and worsens the resolution of the reconstructed event plane.
In a realistic situation, the best choice is to use the full-event correlation method and to extrapolate the resolution from η sub-events (see sec. 5.4).
4.2 Flow simulation with GeVSim

As explained in the previous chapter, the accuracy of the event plane method in reconstructing the elliptic flow coefficient v2 depends on two ingredients: particle multiplicity and magnitude of the elliptic flow 6. These two quantities are combined in the parameter χ2 = v2 × √(2M), which is used to calculate the event plane resolution [123]. The event plane method has a relative systematic error on the measured v2 proportional to 1/(M v_2^2) ∝ 1/χ_2^2 [134]. Unlike the other approaches,
6An estimate of the systematic uncertainty of different flow analysis methods was given in section 3.4, see also [134].
the event plane method has (in principle) no lower limit on χ2. However, when the uncertainty on v2 becomes of the same order of magnitude as the measured value, the method is no longer reliable.
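The dependence of the resolution on χ2 can be evaluated with the standard analytic formula of Poskanzer and Voloshin, assumed here to correspond to the eq. 3.8 referred to in the text; the pure-Python Bessel series is my own helper.

```python
from math import exp, factorial, pi, sqrt

def bessel_i(n, x, terms=40):
    """Modified Bessel function I_n(x) via its power series (integer n >= 0)."""
    return sum((x / 2.0) ** (2 * k + n) / (factorial(k) * factorial(k + n))
               for k in range(terms))

def ep_resolution(chi):
    """Event plane resolution <cos[2(Psi_2 - Psi_R)]> as a function of
    chi_2 = v2*sqrt(2M) (Poskanzer & Voloshin resolution formula, k = 1)."""
    x = chi * chi / 4.0
    return (sqrt(pi) / (2.0 * sqrt(2.0))) * chi * exp(-x) * (bessel_i(0, x) + bessel_i(1, x))

def chi2_param(v2, mult):
    """chi_2 = v2 * sqrt(2M), combining flow magnitude and multiplicity."""
    return v2 * sqrt(2.0 * mult)
```

For example, ⟨v2⟩ = 3.4% with M = 500 gives χ2 ≈ 1.1 and a resolution of roughly 0.6; the resolution approaches 1 for χ2 ≳ 3.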
To avoid any bias from theoretical predictions and from the detector acceptance, this feasibility study has been performed on a wide sample of pure Monte Carlo simulations (without full detector reconstruction), produced with different combinations of multiplicity and v2. A total of 49 sets of GeVSim events have been simulated: 7 centrality classes (with dN/dη ranging between 100 and 10000 particles per unit rapidity) times 7 different input values of v2 (with v_2^sat = 1% to 50%).
Figure 4.6. Observed elliptic flow v_2^obs (a) and full-event plane resolution (b), with respect to the charged multiplicity at mid-rapidity. The 7 data sets represent the 7 input values of ⟨v2⟩ (in %: 0.3, 0.7, 1.7, 3.4, 6.7, 10.1, 16.8) listed on the right side of the plot (see tab. 4.1).
The parametrization of v2(pT ) is described in sec.1.3.4, i.e. v2 increases linearly for pT < 2 GeV/c and becomes constant on its saturation value at pT ≥ 2 GeV/c. The particle spectrum is an exponential pT distribution with slope parameter (tem perature) T0 = 250 MeV, flat in pseudorapidity (model n.1 in GeVSim, see [109]), generated in the kinematic range 0 < pT < 10 GeV/c and η < 1. The charged particle composition is 80% pions, 10% kaons, and 10% protons/antiprotons 7 (the settings are the same for all multiplicities).
In this way, the integrated v2 is linearly proportional to its saturation value:

⟨v2⟩ = (1/N_tot) Σ_{i=1}^{N_tot} v_2^i
     = (1/N_tot) [ ∫_0^{2 GeV/c} v_2^sat (dN/dpT) (pT/pT^sat) dpT + ∫_{2 GeV/c}^{10 GeV/c} v_2^sat (dN/dpT) dpT ]
     = v_2^sat × [ (1/N_tot) ( ∫_0^{2 GeV/c} (dN/dpT) (pT/pT^sat) dpT + ∫_{2 GeV/c}^{10 GeV/c} (dN/dpT) dpT ) ]
     = v_2^sat × k_{s→i} .  (4.5)
The above integral gives k_{s→i} = 1/2.98. Integrated and saturation values of v2 for the present set of simulations are summarized in table 4.1.
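The conversion factor can be checked numerically. The sketch below uses generic exponential shapes as stand-ins for the GeVSim spectrum (the exact spectral form of model n.1 with T0 = 250 MeV is an assumption here, so the resulting k will not reproduce 1/2.98 exactly).

```python
from math import exp

def k_sat_to_int(spectrum, p_sat=2.0, p_max=10.0, steps=100000):
    """Numerically evaluate the k_{s->i} factor of eq. 4.5: v2 rises linearly
    up to p_sat (GeV/c) and is constant at its saturation value above it."""
    dp = p_max / steps
    num = den = 0.0
    for i in range(steps):
        pt = (i + 0.5) * dp  # midpoint rule
        w = spectrum(pt) * dp
        den += w
        num += w * (pt / p_sat if pt < p_sat else 1.0)
    return num / den

# Hypothetical stand-in spectra (the true GeVSim form may differ):
boltzmann_like = lambda pt: pt * exp(-pt / 0.25)   # dN/dpT ∝ pT exp(-pT/T0)
pure_exponential = lambda pt: exp(-pt / 0.25)      # dN/dpT ∝ exp(-pT/T0)
```

With these shapes k comes out around 0.25 and 0.13 respectively: the softer the spectrum, the smaller the fraction of particles carrying the saturated v2, hence the smaller ⟨v2⟩/v_2^sat.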
7 This is slightly different from the particle composition generated by Hijing (see sec. 5.4); however, the magnitude of v2 is the same for all particle species, and the detector acceptance is not taken into account.
Table 4.1. Saturation v2 and the resulting integrated flow.

v_2^sat (%):  1    2    5    10   20   30    50
⟨v2⟩ (%):    0.3  0.7  1.7  3.4  6.7  10.1  16.8
The number of events produced for each sample is inversely proportional to the particle multiplicity (so that the analysis always involves the same number of particles): N_evts × dN/dη = 2 × 10^7.
The event plane analysis has been applied to charged primary particles (π±, K±, p and p̄) with |η| < 1. Figure 4.6 summarizes the results on the observed v_2^obs and the observed event plane resolution. For higher multiplicity and higher v2 the resolution saturates (i.e. res_2 ∼ 1) and the measured elliptic flow becomes more accurate (the observed v2 approximates the generated one very well and the resolution correction becomes negligible).
Figure 4.7. Feasibility of the event plane analysis with respect to the particle multiplicity and the genuine ⟨v2⟩. The dashed lines represent the observed ‘non-flow’ ṽ2 = √(g̃2/M), calculated from pure non-flow effects in Hijing using different definitions of sub-events (see sec. 4.1); the central value is g̃2 = 0.05, measured from η sub-events. Each marker represents a set of GeVSim simulations, with M and ⟨v2⟩ given by the x and y coordinates respectively. The measured χ2 = v2 √(2M) is compared to √g̃2 and the shape of the marker is assigned accordingly. The event plane analysis fails when the calculated resolution is imaginary (i.e. ⟨cos[2(∆Ψ_2^sub)]⟩ < 0).
Using the reconstructed values of χ2 from the above simulations (sec. 4.2) and the magnitude of non-flow effects estimated with Hijing (sec. 4.1) it is possible to draw figure 4.7, which summarizes the feasibility of the event plane analysis for any possible scenario of multiplicity and v2.
The event plane method only has problems at low values of the integrated v2
(i.e. ⟨v2⟩ ≲ 2%) and at low multiplicities; it works well elsewhere.
4.3 Flow + non-flow

To study the interplay between flow and non-flow effects, a set of 50,000 ‘rescaled’ Hijing events has been produced (7 < b < 14.5 fm, with jet-quenching and resonance decays on, and no full detector reconstruction) and boosted with the flow AfterBurner (see sec. 2.2.3) using values of elliptic flow extrapolated from the lowest hydrodynamic estimate (with c_s^2 = 0.22, see sec. 1.3.2); v2 is assigned to each event according to its impact parameter b. These KineTrees could represent the input for a realistic scenario, where both flow and non-flow effects are present and v2 has a continuous dependence on the impact parameter.
Figure 4.8. (a) Observed v2 vs dN/dη from 50,000 simulated Hijing events (with jet-quenching and resonance decays) boosted with the flow AfterBurner; the expected values of v_2^obs (v_2^obs = v_2^in × res_th) are shown as well (see tab. 4.2). (b) Theoretical, true and observed event plane resolution vs dN/dη (calculated using eq. 3.8, Ψ_2^obs − Ψ_true and ∆Ψ_2^ηsub respectively).
The centrality class selection is based on the multiplicity of final state particles (as it would be in a real experiment): the dN/dη distribution has been divided into ten intervals, each one containing approximately 10% of the total number of events (i.e. 10% of the total inelastic cross section).
Due to the fluctuations involved in the particle production processes (see also sec. 5.4), each multiplicity class contains events in a large range of impact parameter and, consequently, with a large spread in v2. This also reproduces a more realistic v_2^RMS, and therefore a better estimate of the statistical error on the measured v2 (see also sec. 5.5).
Fig. 4.8(a) shows the integrated values of v_2^obs (the observed v2 without resolution correction, see sec. 3.2):

⟨v_2^obs⟩ = ⟨cos[2(φ_i − Ψ_2)]⟩ .  (4.6)
Table 4.2. Summary table of the 50k Hijing + AfterBurner simulations. For each centrality class, both the input and the reconstructed v2 and resolution are listed.

σ_tot %   dNch/dη max   dNch/dη min   ⟨dNch/dη⟩   ⟨v_2^in⟩   res_th   ⟨v_2^meas⟩   res_obs
0−10      3000          1880          2322        2.37%      0.767    2.60%        0.844
10−20     1880          1350          1603        5.16%      0.934    5.14%        0.955
20−30     1350          960           1147        7.29%      0.956    7.23%        0.973
30−40     960           660           802         8.86%      0.957    8.73%        0.976
40−50     660           440           545         9.72%      0.946    9.65%        0.972
50−60     440           280           355         9.81%      0.914    9.85%        0.955
60−70     280           170           221         9.02%      0.826    9.25%        0.905
70−80     170           100           133         7.39%      0.639    8.18%        0.784
80−90     100           50            74          5.44%      0.393    7.29%        0.608
90−100    50            0             28          3.56%      0.167    8.09%        0.400
The figure also shows the ‘expected’ values of the observed ⟨v_2^obs⟩, calculated for each centrality class as ⟨v_2^in⟩ times the expected resolution res_th, obtained from the input values of v2 and multiplicity using eq. 3.8 (see tab. 4.2).
Figure 4.9. Measured and simulated v2 vs dN/dη for the 50k Hijing + AfterBurner events (see tab. 4.2); non-flow effects are calculated as the difference between the two values (for a comparison with experimental results at lower energy, see ref. [139]).
Non-flow effects are responsible for the larger magnitude of the reconstructed v_2^obs with respect to the simulated one. The same happens to the resolution of the event plane calculated from sub-events (fig. 4.8(b)), where we see the same systematic effect. In mid-central and most central events (500 < dN/dη < 2000) the genuine elliptic flow is much higher than the non-flow contribution (M v_2^2 ≫ g̃2, see eq. 4.4) and therefore the analysis works perfectly. More peripheral events (dN/dη < 500)
are an example of a situation where M v_2^2 ∼ g̃2, and therefore the event plane analysis leads to an incorrect result (lower left corner of fig. 4.7).
Finally, fig. 4.9 shows the systematic increase in the measured values of v2 due to non-flow effects. The difference between simulated and reconstructed v2 is an estimate of the non-flow contributions to the measurement, assuming the centrality dependence of v2 is described by the hydro extrapolation. For a comparison with experimental results from STAR, see refs. [139] and [140].
Chapter 5
Simulations & Results
The event plane analysis has been studied for lead-lead collisions at the LHC by means of an elaborate set of AliRoot simulations, in order to determine its feasibility with the ALICE detector.
Using the parametrizations described in sec.1.3, particle multiplicity and elliptic flow have been extrapolated to LHC energy under three different assumptions on the impact parameter dependence of v2 (considered as the upper/lower limit of the existing predictions). Using these extrapolations, three sets of fully reconstructed GeVSim events have been produced in a few centrality classes, and the event plane analysis has been optimized including detector effects.
The study of the analysis cuts, their optimization, and the efficiency corrections are discussed in sec.5.1 and 5.2 respectively, while the results of the flow analysis of the GeVSim events are presented in section 5.3 (including an estimate of the systematic error of the measurement).
Using the extrapolation with the lowest v2 (see section 1.3), a set of Hijing events with flow AfterBurner has been simulated and fully reconstructed, leading to a complete set of data including both flow and non-flow effects. In this more realistic scenario, the reconstructed values of v2 have been compared to the simulated ones, and the systematic effects due to non-flow have been calculated (see sec. 5.4).
5.1 Efficiency study

In a real experiment, the accuracy of the event plane analysis depends on the detector performance. In the present approach, detector effects are quantified by two main ‘estimators’, which can both be studied using Monte Carlo simulations with full detector reconstruction (as provided by the AliRoot framework, see sec. 2.2.2):
• the reconstruction efficiency (i.e. how many primary stable particles are actually reconstructed by the detector),
• and the purity of the sample (i.e. how accurately the reconstructed tracks match the simulated primary particles).
The aim of the present analysis is to measure both differential and integrated elliptic flow of unidentified charged primary particles produced in the interaction; therefore the track selection is optimized for selecting primary stable hadrons. We define as ‘stable’ a particle that lives long enough to reach the ALICE TPC and can be fully reconstructed [71] (i.e. π±, K±, p and p̄).
5.1.1 Efficiency & Purity

For any applied cut, N_ESD is the total number of reconstructed AliESDtracks passing the cut, and N′_ESD is the number of ‘correctly reconstructed’ primary tracks passing the cut (i.e. tracks which are reconstructed from primary stable hadrons within the same pT bin of the generated particles, see below). N′_MC is the number of primary stable hadrons (π±, K±, p and p̄) generated within the acceptance of the detector (i.e. pT > 0.1 GeV/c, |η| < 0.9 and 0 ≤ φ < 2π 1, see sec. 2.1).
Figure 5.1. (a) Transverse momentum resolution, defined as ⟨∆pT⟩/pT, where ⟨∆pT⟩ is the RMS of the ∆pT distribution of charged primary hadrons in the central barrel detector. (b) ∆pT as a function of pT and its linear approximation 0.05 × (1 + pT) at 2.5 RMS (∼ 99% of the ∆pT distribution).
Fig. 5.1(a) shows that the relative transverse momentum resolution of the TPC, ∆pT/pT, depends only weakly on pT in the momentum range of interest (pT = 0.1 to 10 GeV/c); therefore the condition for a track to be reconstructed in the same pT bin as the simulated primary particle can be approximated as:
∆pT < w0 × (1 + pT/GeV/c), (5.1)
where ∆pT = pT(ESD) − pT(MC).
The ∆pT distribution is roughly Gaussian at each pT, with a longer tail on the right side due to the statistically larger abundance of low-momentum tracks

1 Detector cracks are not taken into account in the definition of efficiency and purity.
reconstructed at a higher pT (however, this effect is less than 1% for primary tracks and can be neglected).
The parameter w0 of eq. 5.1 is chosen to linearly approximate the observed RMS of ∆pT as a function of pT (fig. 5.1(b)) at 2.5σ (∼ 2.5 RMS in ∆pT), where 99% of the track candidates are found: w0 = 50 MeV/c (with pT expressed in GeV/c). With this parameter, eq. 5.1 defines the minimum bin size in pT such that the number of particles reconstructed in the wrong bin is negligible (≲ 1%) 2.
Eq.5.1 approximates the requirement that in the final histograms a reconstructed track enters the same pT bin as the simulated particle.
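As a sketch, the bin-matching condition of eq. 5.1 reduces to a one-line predicate (evaluating the window at the generated pT is my assumption):

```python
def same_pt_bin(pt_esd, pt_mc, w0=0.05):
    """Eq. 5.1: a track counts as correctly reconstructed if
    |pT(ESD) - pT(MC)| < w0*(1 + pT), with pT in GeV/c and w0 = 50 MeV/c."""
    return abs(pt_esd - pt_mc) < w0 * (1.0 + pt_mc)
```

At pT = 1 GeV/c the window is 100 MeV/c; at 9 GeV/c it grows to 500 MeV/c, following the linear degradation of the TPC momentum resolution.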
The efficiency is defined as the number of primary charged hadrons (π±, K±, p and p¯) correctly reconstructed divided by the number of primary charged hadrons generated in the acceptance:
eff = N′_ESD / N′_MC .  (5.2)
For a specific particle type, the efficiency is a detector property which depends on the detector configuration, the geometrical acceptance, the reconstruction algorithm and the applied cuts (see sec.5.1.2).
The purity is defined as the number of correctly reconstructed primary tracks divided by the total number of reconstructed tracks within the cut:
pur = N′_ESD / N_ESD .  (5.3)
Without considering the experimental determination of the particle identification, the purity quantifies the level of contamination of the reconstructed spectra (e.g. from secondaries and from tracks reconstructed at the wrong momentum). It depends on the applied cuts and on the simulated input spectra 3.
By definition, both efficiency and purity are smaller than 1 (the particles counted by the numerator are a subset of the ones counted by the denominator). We will consider efficiency and purity both differentially (as a function of the transverse momentum pT) and integrated (over the range of interest of the present analysis, i.e. between pT = 0.1 and 10 GeV/c).
The correction to the observed spectra (when the simulated one is given) is expressed by the ratio:
corr = eff / pur = (N′_ESD / N′_MC) × (N_ESD / N′_ESD) = N_ESD / N′_MC .  (5.4)
In a situation of known input spectra (where efficiency and purity could be deter mined exactly), the original signal is exactly recovered by dividing the reconstructed
2 Due to the limited statistics, the present analysis used a pT bin size at least twice as large as the lower limit given above.
3Both the number of secondaries and the contamination from other pT bins depend on the number of primary particles generated at each pT .
spectra by this factor, i.e.:
d³N/(dpT dη dφ) = [1 / corr(η, pT, φ)] × d³N^obs/(dpT dη dφ) .  (5.5)
In reality such a correction factor can only be determined by simulating a realistic input spectrum, which should be modeled on observed experimental data, not yet available in ALICE. Therefore the effect of impurities is absorbed into the systematic error, and the only correction applied is the detector efficiency as a function of pT.
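The bookkeeping of eqs. 5.2–5.4 amounts to simple ratios of counts (per pT bin); a minimal sketch:

```python
def efficiency(n_correct_esd, n_mc):
    """Eq. 5.2: correctly reconstructed primaries / primaries generated in acceptance."""
    return n_correct_esd / n_mc

def purity(n_correct_esd, n_esd):
    """Eq. 5.3: correctly reconstructed primaries / all reconstructed tracks in the cut."""
    return n_correct_esd / n_esd

def correction(n_correct_esd, n_mc, n_esd):
    """Eq. 5.4: corr = eff/pur = N_ESD / N'_MC, the factor dividing the observed spectrum."""
    return efficiency(n_correct_esd, n_mc) / purity(n_correct_esd, n_esd)
```

With, e.g., 800 correctly reconstructed primaries out of 1000 generated and 900 total reconstructed tracks, eff = 0.8, pur ≈ 0.89 and corr = 900/1000 = 0.9: the contamination partially compensates the losses.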
5.1.2 Particle Composition

The reconstruction efficiency (and its transverse momentum dependence) is different for different particle species, and the particle composition of the sample is not known and, moreover, is not constant as a function of pT.
Figure 5.2. Top row: generated and reconstructed dN/dpT spectra for pions, kaons and protons in the ALICE central barrel detector (|η| < 0.9). Bottom row: efficiency and purity as a function of pT for pions, kaons and protons. No particle identification is involved; the cuts applied to the data are discussed in sec. 5.2.
Using the Monte Carlo information from the simulations, both efficiency and purity can be studied for each particle species separately, showing their different pT dependence. Fig. 5.2 shows the generated and reconstructed pT spectra of pions, kaons and protons produced at |η| < 0.9 (top row), and their reconstruction efficiency and purity (bottom row).
Since the aim of the present analysis is the characterization of elliptic flow of unidentified charged particles, the efficiency corrections are calculated from the overall spectra of reconstructed tracks, without involving the effects of particle identification. Not knowing the particle composition and the shape of the dN/dpT distribution for each particle, a way to determine the systematic error of this procedure is to look at the differences between different predictions for heavy ion events at the LHC.
Figure 5.3. Overall efficiency (a) and purity (b) in the ALICE central barrel detector (|η| < 0.9) as a function of pT for two sets of simulations, Hijing and GeVSim; the absolute value of the difference is shown as well. The same set of cuts is applied to both samples (see sec. 5.2).
The simulations presented in this chapter are produced from two different scenarios: the particle ratios of the GeVSim events (tab. 5.2) are calculated with an implementation of the thermal model for particle production (Thermus [144]), while the particle composition of the Hijing events (tab. 5.4) is determined by its internal implementation of QCD interactions and hadronization processes [104].
Figure 5.3 shows the overall efficiency and purity as a function of pT for the two different inputs (Thermus and Hijing). The difference between the two gives an estimate of the systematic error for the a priori unknown particle ratios. However, the difference is very small due to the fact that in both models the majority of particles are pions, and the amount of protons and kaons is only of the order of 10% (this prediction is supported by experimental data from RHIC, see for instance [45]).
The figure also shows that the reconstruction efficiency rapidly saturates to its maximum (eff_max ∼ 90%) for pT ≳ 1 GeV/c. This is determined by the dominating contribution from pions, and a similar behaviour is also observed for protons (see fig. 5.2), while the efficiency of kaon reconstruction saturates only at pT ≃ 2 GeV/c due to decays 4.
4 Part of the K mesons with low momentum decay before reaching the TPC (the mean lifetime of the charged kaon is τ_K± ≃ 1.238 × 10−8 s, giving cτ_K± ≃ 3.7 m). At large momentum (for
The systematic error on the efficiency due to the unknown particle ratios is calculated from the difference between the efficiencies of the two sets of simulations (Hijing and GeVSim). See sec. 5.2.2 for the details.
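The kaon-decay losses mentioned above (footnote 4) can be estimated with an exponential decay law, since the lab-frame decay length is (p/m)cτ. The 0.85 m used for the TPC inner radius is an illustrative assumption, as is treating the flight path as purely transverse.

```python
from math import exp

M_KAON = 0.4937   # GeV/c^2, charged kaon mass
CTAU_KAON = 3.7   # m, c*tau of the charged kaon (as quoted in the text)

def kaon_survival(p, radius=0.85):
    """Probability that a charged kaon of momentum p (GeV/c) reaches a
    transverse radius `radius` (m) before decaying."""
    decay_length = (p / M_KAON) * CTAU_KAON   # Lorentz-dilated decay length
    return exp(-radius / decay_length)
```

A 0.5 GeV/c kaon survives with probability ≈ 0.8, while at 2 GeV/c the survival is ≈ 0.94, consistent with the kaon efficiency saturating only around pT ≃ 2 GeV/c.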
5.1.3 Multiplicity (in)dependence

Figure 5.4 shows that, in agreement with the ALICE PPR [71], the efficiency is almost constant with respect to the particle multiplicity. However, comparing the efficiency of peripheral, mid-central and central events (i.e. from dN/dη ∼ 100 to ∼ 2000 tracks per unit rapidity, according to the extrapolation given in sec. 1.3), a small systematic decrease in the reconstruction efficiency as a function of pT can be observed over the whole range of interest of the present analysis (0.1 < pT < 10 GeV/c).
Figure 5.4. Track reconstruction efficiency as a function of pT at three multiplicities, dN/dη|_{|η|<0.5} ≃ 100, 500, 2000. The absolute value of the difference between the highest and the lowest multiplicity samples is shown as well (data from the GeVSim simulations, see tab. 5.1 for the details).
In the present analysis the efficiency is calculated as an average over all produced events. Therefore, the difference in efficiency between the lowest and highest multiplicity events (which is of the order of a few %) is added to the systematic error on the calculated efficiency. For more details, see sec. 5.2.2.
5.1.4 Main Vertex

The nominal η acceptance of the TPC is from −0.9 to 0.9 for collisions at z = 0 (the center of the TPC). However, due to the geometrical arrangement of the beam crossing, the collision can happen anywhere in an ‘interaction diamond’, i.e. in
pT > mK±/c) the kaon lifetime becomes significantly Lorentz dilated.
a length of about 30 cm along the z axis (see sec. 2.3.1). As the location of the primary vertex changes event-by-event, the η acceptance of the ALICE TPC is also different for each event. An event at the edge of the interaction region, with main vertex at z = 15 cm, will see the TPC with an acceptance −0.93 ≲ η ≲ 0.85 (see sec. 2.1.2).
Figure 5.5. Reconstruction efficiency as a function of pseudorapidity for a fixed main vertex position (empty markers) and for a vertex position Gaussian-distributed in the ‘interaction diamond’ (−15 ≲ z ≲ 15 cm). The plot is obtained from 2 sets of 1000 fully reconstructed GeVSim events, generated with a flat dN/dpTdη spectrum.
Considering events with a Gaussian distribution of the main vertex position, the overall efficiency rapidly drops above |η| ≃ 0.85, as we see in fig. 5.5. The figure shows the efficiency of primary particle reconstruction, as a function of pseudorapidity, for two different sets of simulations, produced with fixed and variable primary vertex position. The first set has a fixed main vertex position at z = 0; the second has the main vertex randomly located along z (with a Gaussian distribution of σz = 5.3 cm).
In a realistic case (the latter), a symmetric η cut should be used in a region of flat efficiency (e.g. |η| < 0.85), to avoid introducing an artificial asymmetry in the event. However, the simulations presented in this chapter have been produced with a fixed primary vertex position at z = 0, and moreover the η dependence of v2 is parametrized flat over the full η range (see chap. 1.3).
Therefore, in the present analysis, only a sharp cut at |η| < 0.9 is applied (to include the widest TPC range with a ‘uniform’ tracking efficiency). The measurements are averaged over the whole detectable pseudorapidity interval, and the main vertex position has not been taken into account in the calculation of the systematic error.
5.2 Cut optimization

Cuts are studied with respect to the reconstruction efficiency and the purity of the sample, using as input the full set of GeVSim and Hijing simulations (see sec. 5.3 and 5.4).
The aim of the cuts is to isolate primary particles, to allow a clean reconstruction of the differential shape of v2(pT) without any further correction and without losing too much statistics, in order to obtain a good balance between statistical and systematic error.
Detector signal
Our main interest is in measuring the elliptic flow of unidentified charged tracks reconstructed in the ALICE central barrel detectors. Therefore, the first cut consists in selecting tracks reconstructed in the TPC 5, in the pseudorapidity range of full coverage (|η| < 0.9). In addition we want the track fit to be propagated (at least) to the ITS, so that the extrapolation to the primary vertex becomes more reliable (δVTX ≲ 100 µm, see below). Fortunately the efficiency drops by less than 5% when requiring the ITS signal in association with the TPC (see chap.5 of the ALICE PPR [25]).
Since no particle identification is needed for the measurement of unidentified particle flow, the outermost detectors of the central barrel (TRD and TOF) are not part of the present analysis (however, if a particle reaches them, the Kalman filter includes them in the track fit, improving the precision, see sec.2.3). Due to the larger distance from the interaction point and the smaller θ coverage, requiring a TRD and TOF signal would introduce a strong cut in pT and η, and the overall efficiency would be dramatically reduced from 80−90% to less than 60% (see chap.5 of the ALICE PPR [25]).
Constrainability condition, fit χ2 and number of fit points/max (TPC)

A first selection of primary tracks is realized by the constrainability condition, where a track is defined 'constrainable' if the main vertex of the collision can be included as a fit point.
In the reconstruction code, the constrainability of tracks is tested at the third pass of the fit procedure (see 2.3), when the track is refitted from its outermost point inward. The main vertex is fed to the Kalman filter as an additional space point and, if the fit succeeds with an ‘acceptable’ 6 χ2, the track is labeled as constrainable and the constrained parameters of the track are updated with the last refit. The
5In the present analysis, only full tracks are considered (for which, by definition, the tracking algorithm starts from the TPC, see sec.2.3). Track segments reconstructed by other detectors (e.g. ITS 'tracklets') are not taken into account.
6From the χ2 distribution (fig.5.6(a)) we see that a value of χ2 < 77 is 'acceptable' (i.e. χ ≲ 8.8 σ′, where σ′ is the uncertainty on the primary vertex position).
Figure 5.6. (a) dN/dχ2 of all constrainable AliESDtracks (with TPC + ITS signal) and all constrainable primaries. The full histogram represents the differential distribution, while the upper lines represent the integrated number of tracks for any given cut on the fit χ2. The number of simulated primaries is shown as well. (b) Integrated efficiency and purity as a function of the cut on the fit χ2. The total purity is not very sensitive to the applied cut; instead, the ratio primaries/all tracks reconstructed at each χ2 provides a more sensitive estimator.
constrainability of the track is a necessary condition to ensure that an extrapolation of the track’s parameters exists in the proximity of the interaction point.
The constrainability condition alone is very efficient in removing secondary tracks, while a stricter requirement on the fit χ2 does not considerably improve the purity of the selection (see fig.5.6(b)).
However, the ratio between constrainable tracks and constrainable primaries as a function of the fit χ2 (the ratio dN′ESD/dNESD at each bin of the dN/dχ2 distribution) reaches 50% at χ2 = 20, i.e. less than half of the constrainable tracks reconstructed with χ2 ≥ 20 actually come from primary particles. Therefore, in the present analysis, the cut χ2 < 20 has been applied (a wider cut would increase the background more than the signal).
Due to the low sensitivity of the efficiency with respect to this cut, the fit χ2 has not been included in the calculation of the systematic error.
A one-to-one comparison between reconstructed and simulated particles shows a non-negligible contribution of 'double counted' (or split) tracks. The experimental precision of the tracking device and the accuracy of the reconstruction algorithm can cause a single particle going through the TPC to be reconstructed twice, producing two different track candidates in the AliESD.
This applies both to curved (low momentum) tracks at η ∼ 0, which spiral back toward the primary vertex, and to straight (high momentum) tracks flying across different detector elements which are not perfectly aligned.
A strategy developed at STAR [129] to suppress this effect is to apply a cut over
Figure 5.7. (a) dN/d(Nfit/Nmax) of all constrainable AliESDtracks (with TPC + ITS signal) and all constrainable primaries. The full histogram represents the differential distribution, while the upper lines represent the integrated number of tracks for any given cut on Nfit/Nmax in the TPC. The number of simulated primaries is shown as well. (b) Efficiency and purity as a function of the cut on Nfit/Nmax in the TPC.
the number of fit points (from which the track candidate is interpolated) normalized by the number of clusters that the track could produce in the detector. The actual number of space points used for the track fit Nfit is stored in the AliESDtrack object (see sec.2.3) and, in addition, the reconstruction algorithm uses a helix parametrization of the track to estimate the number of clusters Nmax that a particle flying along the reconstructed trajectory would give in each detector element. This is particularly important in the TPC, where the number of sensitive elements is large and the fit of each track can include up to 160 space points (see 2.1.2).
Figure 5.8. Total efficiency and purity (in the range 0.1 < pT < 10 GeV/c, |η| < 0.9) for all the applied cuts, showing the results for both the GeVSim and the Hijing simulations (see sec.5.3 and 5.4 respectively).
A cut on the ratio Nfit/Nmax > 0.6 in the TPC helps in removing the contributions from double counted tracks and slightly improves the purity, by ∼ 1% (see fig.5.8). However, both the efficiency and the purity show a very flat dependence with respect to this cut (see fig.5.7); even a 10−20% systematic error on the value of Nfit/Nmax has a negligible effect on the calculated efficiency. Therefore this cut has not been included in the calculation of the systematic error (see sec.5.2.2).
Fig.5.8 summarizes the cuts applied in the present analysis, showing the integrated efficiency and the purity of the selection passing all cuts, separately for the GeVSim and the Hijing simulations. On top of the basic set of cuts (tracks with both TPC and ITS signal, with at least 60% of the TPC clusters included in the fit and a constrained χ2 < 20), further cuts can be applied to enhance the purity of the track candidates, e.g. a cut on the distance of closest approach to the main vertex (see below).
Transverse DCA
The excellent resolution of the ITS (see sec.2.1.1) allows an extrapolation of the track to the main vertex with a precision of the order of 100 µm in the x − y plane, depending on the momentum of the track and on the number of reconstructed clusters, and somewhat worse in the z direction 7.
The extrapolated distance between the fitted track and the event's main vertex is called Distance of Closest Approach (DCA). A Gaussian fit of the DCA distribution of primaries in the transverse plane gives σtDCA = 160 µm (see fig.5.9), while a fit in the z direction gives σzDCA = 430 µm. Due to their intrinsically different precision the two are usually considered separately, and the much better resolution of the DCA in the transverse plane makes it a good parameter for selecting primary particles.
Figure 5.9(a) shows the transverse DCA distribution for all constrainable 8 tracks and for constrainable primaries, together with a half-Gaussian fit of the latter (with fixed peak position at 0). The integrated efficiency and purity as a function of the tDCA cut are shown in fig.5.9(b): the purity of the selection is not very sensitive to the applied cut, while the efficiency rapidly drops for a tDCA cut smaller than a few hundred µm.
In the present analysis, a cut at 500 µm (∼ 3 σtDCA) has been applied. Together with the other cuts, this condition results in an integrated purity of primaries higher than 95%. The detailed pT dependence of the purity for both the GeVSim and the Hijing sample is shown in fig.5.3(b).
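Since the transverse DCA of primaries is well described by a half-Gaussian, the fraction of primaries surviving a given cut follows directly from the error function. A minimal sketch, using the fitted σtDCA = 160 µm (the function name is illustrative):

```python
import math

# Fraction of primaries kept by a transverse-DCA cut, assuming the
# half-Gaussian shape fitted in fig. 5.9 (sigma_tDCA = 160 um).
def dca_cut_efficiency(cut_um, sigma_um=160.0):
    return math.erf(cut_um / (sigma_um * math.sqrt(2.0)))

print(dca_cut_efficiency(500.0))  # cut at ~3 sigma: ≈ 0.998
print(dca_cut_efficiency(160.0))  # cut at 1 sigma:  ≈ 0.683
```

A 500 µm cut therefore costs essentially no primaries, consistent with the quoted integrated purity above 95% being driven by background rejection rather than signal loss.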
The systematic uncertainty connected to this cut is calculated by assuming an imprecision of ±100 µm on the reconstructed tDCA, as if the tDCA distribution obtained from the simulation did not correctly reproduce the one measured in the
7See sec.2.3 and the ALICE PPR [25] (at section 5.1.6.3).
8The main vertex is included in the fit.
Figure 5.9. (a) dN/dtDCA of all constrainable AliESDtracks (with TPC + ITS signal, χ2 < 20 and Nfit/Nmax > 0.6) and of constrainable primaries. The full histogram represents the differential distribution, while the upper lines represent the integrated number of tracks for any given cut on the transverse DCA (the number of simulated primaries is shown as well). The experimental resolution on the measured tDCA is obtained through a Gaussian fit of the transverse DCA distribution of reconstructed primary particles (σtDCA ≃ 160 µm). (b) Efficiency and purity of primaries with respect to the transverse DCA cut.
real experiment. The reconstruction efficiency has been calculated separately for two choices of the tDCA cut (tDCA < 500 ± 100 µm) and the difference is taken as an estimate of the systematic error (see sec.5.2.2).
Low pT cut and extrapolation
The magnetic field in the ALICE central barrel introduces a low pT cut in the detector acceptance. Low pT particles (pT ≲ 100 MeV/c for pions) are curved enough to barely reach the TPC, and therefore the track reconstruction efficiency becomes almost zero in the pT region below 100 MeV/c (see sec.2.1).
The strategy applied in the present analysis is to limit the measurements to the pT range above 100 MeV/c and, after having measured the particle yield and applied the efficiency corrections, to extrapolate the measurements down to pT = 0.
The content of the first bin of the dN/dpT histogram is estimated as a fraction of the total integral of the reconstructed spectrum: assuming the cumulative dN/dpT spectrum of charged 'stable' hadrons is known, it is possible to calculate the ratio between the number of particles produced with pT < 100 MeV/c (N1) and the number of particles produced between pT = 0.1 and 10 GeV/c (Na):
n_{low} = \frac{N_1}{N_a} = \frac{\int_0^{100\,\mathrm{MeV}/c} \frac{dN}{dp_T}\, dp_T}{\int_{100\,\mathrm{MeV}/c}^{10\,\mathrm{GeV}/c} \frac{dN}{dp_T}\, dp_T}\,. (5.6)
In the present analysis, the ratio nlow has been calculated exactly from the (known) input spectra; the values are 0.0406 for GeVSim (with input spectrum given by eq.5.10)
and 0.0436 for Hijing. The difference between the two values is used to estimate the systematic error of the method due to the ‘a priori’ unknown shape of the observable dN/dpT spectrum (see sec.5.2.2).
For a small uncertainty on the ratio nlow, the statistical error associated to this procedure is comparable to the error of a counting experiment (σN = √N), and therefore to the statistical error of any other pT bin:

\sigma_{N_1} = n_{low} \times \sigma_{N_a} \simeq \frac{N_1}{N_a} \times \sqrt{N_a} = \frac{N_1}{\sqrt{N_a}} < \frac{N_1}{\sqrt{N_1}} = \sqrt{N_1}\,. (5.7)
In a real experiment, the ratio nlow should be obtained from an accurate fit of the reconstructed spectra (the fit can be done just once, assuming the centrality class dependence of the spectral shape is negligible), and the calculated nlow could be used as a reconstruction parameter for recovering the dN/dpT spectrum from the observed data corrected by the efficiency.
The extrapolation of the dN/dpT spectrum could be done using a Levy distribution, as suggested by some recent studies at RHIC [41]:

\frac{1}{p_T}\frac{dN}{dp_T} = A \cdot \frac{1}{\left(1 + p_T/(n \cdot T)\right)^{n}}\,. (5.8)
Or, more precisely, using a weighted sum of three Levy distributions (for π, K and p) with mT in place of pT, since the observed dN/dpT spectrum is actually the sum of the spectra of all charged stable particles.
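As an illustration of eq.5.6, nlow can be evaluated numerically for a single Levy-type spectrum (eq.5.8 with mT in place of pT). The parameters below are illustrative pion values; since the real observable mixes all charged species, this single-species sketch is not expected to reproduce the quoted 0.0406/0.0436:

```python
import math

M_PI = 0.13957   # pion mass, GeV/c^2 (illustrative single species)
T, N = 0.125, 6.0  # assumed Levy parameters, not the thesis fit

def dn_dpt(pt):
    """Levy-type spectrum, eq. 5.8 written with mT in place of pT."""
    mt = math.sqrt(M_PI**2 + pt**2)
    return pt / (1.0 + (mt - M_PI) / (N * T))**N

def integrate(f, a, b, steps=20000):
    """Simple midpoint-rule integration."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# Eq. 5.6: yield below 100 MeV/c over yield between 0.1 and 10 GeV/c.
n_low = integrate(dn_dpt, 0.0, 0.1) / integrate(dn_dpt, 0.1, 10.0)
print(n_low)
```

For a pion-only spectrum the low-pT fraction comes out larger than the mixed-species values, which shows how sensitive nlow is to the assumed composition and spectral shape.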
A slightly modified version of eq.5.8, incorporating the particle mass dependence, has been used to generate all pT spectra of the GeVSim events (see sec.5.3).
The extrapolations of the particle spectrum and the elliptic flow to low pT (see sec.5.3.3) are an essential step to calculate the integrated v2. The results of the extrapolations, for both the GeVSim and the Hijing samples, are shown in sec.5.3.4 and 5.4.3 respectively.
5.2.1 Final corrections

One of the goals of the present analysis is to measure the integrated elliptic flow 〈v2〉 at midrapidity. This is achieved by taking the average cos[2(φi − Ψ)] in the kinematic range covered by the detector. The track reconstruction efficiency is not constant as a function of η, pT and φ; therefore some corrections are needed to provide a measurement which is not biased by the detector itself.
Efficiency corrections are applied under the assumption that the total momentum spectrum of reconstructed tracks d3N/dp⃗ can be factorized into the three familiar components η, pT and φ. This assumption is not completely true: e.g. straight (high pT) tracks can easily escape through a crack between two segments of the TPC without being detected at all, while more curved (lower pT) tracks could spiral back into the sensitive volume of the TPC and release enough hits to be reconstructed. However, a full 3D study of the efficiency would require much higher statistics than available.
Figure 5.10. (a) Simulated and reconstructed dN/dpT spectra of the full set of simulations (Hijing + GeVSim). (b) Final efficiency and purity correction factors as a function of pT , calculated over the full set of simulations.
• Since the pseudorapidity dependence of v2 is assumed to be flat, η corrections are not taken into account; only an acceptance cut is applied (see the previous sections).
• The geometrical arrangement and the magnetic field in the central barrel introduce a non-flat pT dependence of the efficiency. This is particularly important in the low pT region (pT < 1 GeV/c) where, due to the exponential shape of the dN/dpT distribution, most of the particles are produced, and where the differential shape of v2(pT) is definitely not flat (see chap.1). Therefore, the pT dependence of the efficiency needs to be taken into account when calculating the integrated v2 (i.e. the integral of v2(pT) convoluted with the corrected dN/dpT spectrum, see sec.3.2.5).
• The azimuthal segmentation of the active elements in the main tracking device (the TPC, see sec.2.1.2) causes a periodic drop in the azimuthal dependence of the efficiency, at φ = n × 2π/18 (see fig.3.2(a)). As described in sec.3.2.4, the implementation of the flow analysis code already incorporates a correction of the dN/dφ distribution, i.e. φ weights are used for the determination of the Q⃗ vector.
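The φ-weight correction mentioned in the last item can be sketched as follows: each track is weighted with the inverse of the normalized dN/dφ of the detected tracks, so that the sector-boundary dips are compensated in the Q⃗ vector. The 18-sector efficiency model and all names below are illustrative assumptions, not the AliRoot implementation:

```python
import math
import random

random.seed(42)
NBINS = 72  # phi histogram binning (4 bins per TPC sector)

def phi_bin(phi):
    return int(phi / (2 * math.pi) * NBINS) % NBINS

def sector_efficiency(phi):
    # Toy model: a 20% efficiency dip in the first quarter of each of
    # the 18 TPC sectors (purely illustrative numbers).
    frac = (phi * 18 / (2 * math.pi)) % 1.0
    return 0.8 if frac < 0.25 else 1.0

# Build the observed dN/dphi histogram from uniformly produced tracks.
hist = [0] * NBINS
for _ in range(200000):
    phi = random.uniform(0, 2 * math.pi)
    if random.random() < sector_efficiency(phi):
        hist[phi_bin(phi)] += 1

mean = sum(hist) / NBINS
weights = [mean / h for h in hist]   # w(phi) proportional to 1/(dN/dphi)

def q_vector(phis):
    """Second-harmonic Q vector with phi weights applied."""
    qx = sum(weights[phi_bin(p)] * math.cos(2 * p) for p in phis)
    qy = sum(weights[phi_bin(p)] * math.sin(2 * p) for p in phis)
    return qx, qy
```

By construction the weighted dN/dφ distribution is flat, so the sector structure no longer feeds a spurious modulation into Ψ2.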
The efficiency corrections as a function of pT are calculated by means of the full set of simulations produced for the present analysis (Hijing, and GeVSim with Thermus), using the 'optimal' set of cuts discussed above. This is done to incorporate some systematic effect due to the 'a priori' unknown particle composition, as discussed in sec.5.1.2: the difference between the two sets is added to the systematic error (see sec.5.2.2).
The pT dependence of the reconstruction efficiency (for the two cases, Hijing and GeVSim) is shown in fig.5.3. The combined result (the efficiency correction
factor that is used in the analysis) is shown in fig.5.10(b). The integrated efficiency (under the applied set of cuts) for particles between 0.1 and 10 GeV/c is 〈eff〉 = 67%. The integrated purity is 〈pur〉 = 95.7%.
As described in the previous section, the corrections are applied for pT > 100 MeV/c, while the low pT part of the spectrum (pT < 100 MeV/c) is extrapolated as a fraction of the observed dN/dpT distribution corrected by the efficiency.
Considering also the first pT bin, the total reconstruction efficiency is 〈efftot〉 = N′ESD/N′MC = 64.3% (this number is used to scale up the reconstructed multiplicity for the plot of v2 vs dN/dη, see fig.5.24 and 5.29).
5.2.2 Systematic Error

The systematic uncertainty on the efficiency (as a function of pT) is calculated by varying the most sensitive observables pointed out in section 5.1 (i.e. the particle composition and the multiplicity dependence) and the applied cuts (limiting the discussion to the transverse DCA only, see sec.5.2).
Figure 5.11. Systematic error on the efficiency, calculated from the uncertainty on the particle composition (sec.5.1.2), the difference in particle multiplicity (sec.5.1.3), and the applied cut on the tDCA.
Each contribution is obtained from the absolute difference in the calculated efficiency between two (extreme) cases: the systematic uncertainty due to the unknown particle composition is the difference between the two simulated inputs (sec.5.1.2), the uncertainty due to the multiplicity dependence is the difference between the lowest and highest multiplicity events (sec.5.1.3), and the uncertainty due to the applied DCA cut is the difference between a DCA cut at 400 µm and 600 µm (±100 µm around the chosen value of 500 µm). The total systematic uncertainty σeff is calculated by adding the three contributions in quadrature (see fig.5.11):
\sigma_{\mathrm{eff}} = \sqrt{\sigma^2_{\mathrm{mult.}} + \sigma^2_{\mathrm{p.con.}} + \sigma^2_{t\mathrm{DCA}}}\,. (5.9)
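Numerically, the combination of eq.5.9 is straightforward; the input values below are placeholders, not the thesis numbers:

```python
import math

# Eq. 5.9: per-bin systematic contributions added in quadrature
# (inputs in per cent; the values here are placeholders).
def total_syst(sigma_mult, sigma_pcon, sigma_dca):
    return math.sqrt(sigma_mult**2 + sigma_pcon**2 + sigma_dca**2)

print(total_syst(3.0, 4.0, 0.0))   # -> 5.0 (per cent)
```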
The systematic error on the extrapolation of dN/dpT between 0 and 100 MeV/c is calculated as the difference in the extrapolation parameter nlow between the two sets of simulations (see above): σnlow/nlow ≃ 7%. It is comparable with the systematic error at low pT on the calculated efficiency (for comparison, the value is shown as the first pT bin of fig.5.11).
The systematic uncertainty on the reconstruction efficiency σeff is used to estimate the systematic error on the measured v2 (see sec.5.3.5).
5.3 Genuine flow reconstruction (GeVSim)

This section presents the results of the event plane analysis performed over three different sets of fully reconstructed ALICE events simulated with GeVSim, each set based on a different extrapolation of the centrality dependence of elliptic flow (as presented in sec.1.3).
5.3.1 Simulation details

Events are produced in six centrality classes, with particle multiplicity and width listed in tab.5.1.
The main vertex position has been fixed at (x, y, z) = (0, 0, 0) (see sec.5.1.4). The magnetic field, measured at the center of the ALICE solenoid, is B = 0.4 T.
Table 5.1. Summary table of the 3 sets of GeVSim simulations. For each centrality class (c.c.), the input values of v2, the particle multiplicity (and width), and the number of produced events are listed.
c.c.   dNch/dη ± σ    〈v2^LDL〉   Nevts   〈v2^hydro〉   Nevts   〈v2^hydro2〉   Nevts
0      1922 ± 300     0           1k      0             0       0              0
1      1619 ± 290     5.15        1k      2.35          2.5k    3.5            1.7k
2      1013 ± 200     11.4        0.4k    5.87          1k      8.81           0.5k
3      617 ± 160      12.95       0.5k    8.04          1k      12.06          0.5k
4      213 ± 90       8.8         2k      9.55          2k      14.33          0.8k
5      42 ± 30        2.75        16k     7.63          7.5k    11.44          6.7k
Particle composition
The particle composition has been calculated using Thermus, a ROOT implementation of the thermal model for particle production [144]. The chemical freeze-out temperature has been set to Tch = 170 MeV and the baryon chemical potential to
Table 5.2. Total and relative particle abundances calculated with Thermus (input of the GeVSim simulations).
p.type            P.Id.        m (GeV/c2)   %/tot       %/'stable' h±
pions (72.2%)     π+           0.13957      22.5398     39.36
                  π−                        22.5452     39.37
                  π0           0.13498      27.0965     0
kaons (16%)       K+           0.49368      4.05139     7.07
                  K−                        4.04341     7.06
                  K0S          0.49765      3.9437      0
                  K0L                       3.9437      0
nucleons (8.2%)   p            0.93827      2.05554     3.59
                  p̄                         2.03286     3.55
                  n            0.939565     2.05242     0
                  n̄                         2.02883     0
hyperons (3.6%)   Λ0, Λ̄0       1.11568      1.937515    0
                  Σ+, Σ̄−       1.18937      0.528322    0
                  Σ−, Σ̄+       1.19745      0.515834    0
                  Ξ−, Ξ̄+       1.3217       0.311066    0
                  Ξ0, Ξ̄0       1.3148       0.315868    0
                  Ω−, Ω̄+       1.6724       0.057987    0
heavy mesons      φ0           1.01945      0.557406    0
µB = 10 MeV (the calculation was done for hadrons only). The resulting particle abundances are listed in tab.5.2.
The relative ratios of the three types of charged primary hadrons considered in the analysis are 78.7% π±, 14.1% K±, 7.1% p and p̄. All events of this set of simulations have been produced with the same particle ratios and input spectra, while the total multiplicity and the magnitude of elliptic flow are assigned according to the centrality class.
pT and η spectra
In order to reproduce a realistic particle spectrum in the momentum range of interest (0 < pT < 10 GeV/c), the simulated d3N/dp⃗ distribution (expressed in the three familiar components pT, η and φ) has been customized with a user-defined formula, similar to the Levy distribution in mT [41], convoluted with a flat distribution in rapidity y (which leads to a non-flat pseudorapidity distribution) and an azimuthal distribution (with flow) generated by GeVSim.
Figure 5.12. Input spectra, dN/dpT (a) and dN/dη (b), of the GeVSim simulations for the three stable charged hadrons (π±, K±, p and p¯) with the relative ratios given in tab.5.2.
The input dN/dpT spectrum is given by the equation:
\frac{dN}{dp_T} = A \cdot \frac{p_T}{\left(1 + (m_T - m)/(n_M \cdot T_0)\right)^{n_M}}\,, (5.10)
where mT = √(m² + p²T) is the transverse mass, T0 is the slope parameter (temperature) and nM = n/m^α is the modified slope variation parameter. This phenomenological term introduces a weak dependence of the slope variation on the particle mass, so that the tail of the dNi/dpT distribution becomes particle species dependent (through mi) and a single slope variation parameter n can be used to reproduce the spectra of all particles. The parameters of eq.5.10 are tuned by a fit of the generated spectra of pions, kaons and protons produced by Hijing. The obtained values are T0 = 125 MeV, n = 5 and α = 0.11.
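A minimal sketch of eq.5.10 with these parameters shows the intended effect of the modified slope nM = n/m^α: at equal input parameters, the tail of the spectrum falls more slowly for heavier species. Masses and function names are illustrative:

```python
import math

# Eq. 5.10 with the mass-dependent slope n_M = n / m^alpha
# (T0 = 125 MeV, n = 5, alpha = 0.11, the fitted values quoted above).
T0, N_SLOPE, ALPHA = 0.125, 5.0, 0.11

def dn_dpt(pt, m):
    mt = math.sqrt(m * m + pt * pt)      # transverse mass
    n_m = N_SLOPE / m**ALPHA             # modified slope variation parameter
    return pt / (1.0 + (mt - m) / (n_m * T0))**n_m

# The mass dependence hardens the tail for heavier species:
ratio_pi = dn_dpt(3.0, 0.1396) / dn_dpt(0.5, 0.1396)   # pion
ratio_p  = dn_dpt(3.0, 0.9383) / dn_dpt(0.5, 0.9383)   # proton
print(ratio_pi, ratio_p)
```

The high-pT/low-pT yield ratio comes out larger for the proton than for the pion, which is the species dependence the single parameter n is meant to capture.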
Fig.5.13 shows a fit of the dN/dpT spectra generated by Hijing (the fit is limited to the interval 0 < pT < 3 GeV/c). The fit works quite well at low pT, but it fails to reproduce the correct slope of the tail of the distribution for pT ≳ 4−5 GeV/c. However, fig.5.3 shows that the reconstruction is not very sensitive to the shape of the input spectra, especially at high pT: the difference in efficiency (purity) between the two inputs (Hijing and GeVSim), due to the effects of bin migration and particle composition, is smaller than a few %.
To save computing time, the range of the simulations has been limited to the central pseudorapidity interval, around the coverage of the ALICE central barrel detectors (−1.3 ≲ η ≲ 1.3).
Elliptic flow v2
Table 5.1 summarizes the simulated values of 〈v2〉 for the three different parametrizations (named LDL, hydro and hydro2).
Figure 5.13. Fit of the dN/dpT spectra generated by Hijing, using eq.5.10 (the fit is limited to the interval 0 < pT < 3 GeV/c). The y axis is in arbitrary units, and the relative height of the spectra is not proportional to the generated particle ratios.
The differential shape of v2(pT) rises linearly, saturating at pT = 2 GeV/c. Integrating over the given input spectra of π±, K±, p and p̄ (see also sec.4.2), the saturation values of v2 are given by v2^sat = k_{i→s} 〈v2〉, with k_{i→s} = 3.85.
The number of simulated events is chosen so that v2² × dN/dη × Nevts is approximately constant 9; this should give roughly the same statistical error on the measured v2 in each class. The 'constant' value (v2² × dN/dη × Nevts ∼ 3000) is determined by the available resources and CPU time. The resulting statistical error is comparable to the systematic uncertainty (see sec.5.3.5).
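The conversion factor between the saturation value and the integrated v2 can be sketched numerically: k is the inverse of the spectrum-weighted average of the rising-and-saturating v2(pT) shape. The single pion Levy spectrum used here is an assumption, so the resulting k only roughly approaches the quoted 3.85 (which refers to the full π/K/p mix and its input spectra):

```python
import math

M_PI, T0, N = 0.13957, 0.125, 6.0   # illustrative pion Levy parameters

def dn_dpt(pt):
    mt = math.sqrt(M_PI**2 + pt**2)
    return pt / (1.0 + (mt - M_PI) / (N * T0))**N

def v2_shape(pt):
    # v2(pT)/v2_sat: linear rise up to 2 GeV/c, saturated above
    return min(pt / 2.0, 1.0)

# Midpoint-rule integration of <v2(pT)> over 0-10 GeV/c.
h = 0.001
pts = [(i + 0.5) * h for i in range(10000)]
num = sum(v2_shape(p) * dn_dpt(p) for p in pts) * h
den = sum(dn_dpt(p) for p in pts) * h
k = den / num          # v2_sat = k * <v2>
print(k)
```

Because the yield is concentrated well below the 2 GeV/c saturation point, the average v2 is a small fraction of the saturation value, hence a conversion factor of order 4.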
5.3.2 Event plane determination and resolution study

The width of the reconstructed event plane with respect to the true one is described by the resolution parameter (see sec.3.2.1).
As an example, fig.5.14 shows the ∆Ψ distribution (modulo π) for three centrality classes (central, mid-central and peripheral events) of the hydro simulations (see tab.5.1). In the upper part of the figure the difference between the reconstructed event plane and the simulated reaction plane is plotted (∆Ψtrue = Ψtrue − Ψobs2), in the lower part the difference between η subevents (∆Ψηsub = ΨA2 − ΨB2, with A and B equal multiplicity η subevents 10).
The width of the ∆Ψtrue distribution is not very sensitive to the applied cuts, becoming slightly worse if no cuts are applied. However, in the latter case, the observed ∆Ψsub2 distributions are narrower due to azimuthal correlations between secondary particles (such as decay products), and this can lead to an overestimate of the event plane resolution (see below).
9Due to some failed simulations, this is not always the case (see tab.5.1).
10In the absence of nonflow effects the result does not depend on the choice of the subevents.
Figure 5.14. Upper row: ∆Ψtrue distributions for centrality classes 1, 3 and 5 of the hydro simulations (see tab.5.1), using different track selections (ESD with no cuts, with the constrainability condition, and with the tDCA cut). Lower row: ∆Ψη−sub2 distributions. The full histogram represents the ∆Ψ distribution calculated from the KineTree of all generated primary particles.
Using the iterative procedure implemented in the analysis code (see sec.3.2.1), the event plane resolution is extrapolated from the observed ∆Ψsub2 . The iteration is based on eq.3.8, here rewritten for n = 2:
res_2 = \left\langle \cos\left[2(\Psi_2^{obs} - \Psi_{true})\right] \right\rangle = \frac{\sqrt{\pi}}{2\sqrt{2}}\,\chi_2\, e^{-\chi_2^2/4} \times \left[ I_0(\chi_2^2/4) + I_1(\chi_2^2/4) \right]\,, (5.11)

where I_n are modified Bessel functions of order n, and \chi_2 = v_2/\sigma, with \sigma = \sqrt{\frac{1}{2M}\frac{\langle w^2 \rangle}{\langle w \rangle^2}}. For unit weights (w_i = 1), \chi_2 = v_2\sqrt{2M}.
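Eq.5.11 is easy to evaluate numerically; the sketch below implements it with a power series for the modified Bessel functions (pure Python; the helper names are illustrative), using χ2 = v2√(2M) for unit weights:

```python
import math

def _bessel_i(order, x):
    """Modified Bessel function I_order(x) by power series
    (adequate for the moderate arguments needed here)."""
    term = (x / 2.0)**order / math.factorial(order)
    total = term
    for k in range(1, 60):
        term *= (x / 2.0)**2 / (k * (k + order))
        total += term
    return total

def res2(chi):
    """Full-event plane resolution of eq. 5.11 as a function of chi2."""
    x = chi * chi / 4.0
    return (math.sqrt(math.pi) / (2.0 * math.sqrt(2.0))
            * chi * math.exp(-x) * (_bessel_i(0, x) + _bessel_i(1, x)))

# e.g. v2 = 6% and an event-plane multiplicity of M = 1500 (unit weights):
chi = 0.06 * math.sqrt(2 * 1500)
print(res2(chi))   # ≈ 0.95
```

The function rises monotonically from 0 towards 1, which is why the resolution saturates for central (high-multiplicity) classes and degrades quickly for peripheral ones.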
When the extrapolation is done using only primary particles from the KineTree, the result is in perfect agreement with the ‘true’ resolution Ψtrue −Ψobs2 .
The optimization of the cuts for the reconstruction of the event plane is done by comparing the observed event plane resolution with the ‘ideal’ one, calculated by feeding the input values of M ′ and v′2 into eq.5.11. Note that M ′ is the multiplicity used for the calculation of the event plane, i.e. all reconstructible primary particles in the ALICE central barrel (M ′ = 1.8×dN ′/dη), v′2 is the integrated elliptic flow of all primary π±, K±, p and p¯. Figure 5.15 shows the observed event plane resolution (calculated from ∆Ψη−sub2 ) with respect to the centrality class for the three sets of GeVSim events, using different track selections.
The observed resolution becomes lower using more strict cuts because of the
Figure 5.15. Observed event plane resolution with respect to the centrality class (see tab.5.1 for the simulation details), using different track selections, for the LDL, hydro and hydro2 parametrizations. The 'ideal' event plane resolutions are shown as well (obtained from the generated distribution of cos(2[Ψtrue − Ψobs2])).
Figure 5.16. Observed event plane resolution, calculated using pT weights in the definition of Q⃗, versus centrality class (see tab.5.1 for the simulation details), for the LDL, hydro and hydro2 parametrizations. The plot shows the results using different sets of cuts on the AliESD; the 'ideal' values of the event plane resolution are shown as well (from cos(2[Ψtrue − Ψobs2])).

reduced statistics (lower M); however, if no condition is applied to exclude secondary tracks, the observed resolution can be higher than the true one. The effect is more visible in peripheral (low multiplicity) events, where the resolution is far from its saturation (see bin 5 of fig.5.15(a) and (b)). The constrainability condition alone (for TPC + ITS tracks) is enough to obtain a resolution very close to the 'ideal' values.
A better event plane resolution is achieved by using pT weights in the calculation of the Q⃗ vector (see sec.3.2.3). The use of pT weights, in fact, reduces the contribution of tracks at low pT, where the purity is lower (see sec.5.1). For the same reason, the resolution becomes less sensitive to the applied cuts (see fig.5.16).
From the above study we can conclude that the best resolution is achieved by selecting constrainable TPC + ITS tracks and using pT weights.
Figure 5.17. Effect on the full-event resolution (∆Res = resobs2/restrue2) of an incorrectly reconstructed multiplicity, for five different combinations of v2 and M (from the hydro parametrization, see tab.5.1).
From the expression of the Q⃗ vector (eq.3.3) we may argue that the presence of randomly distributed secondaries and double counted tracks does not affect the direction of the reconstructed event plane. The two averages:
\langle \cos(n\varphi_i) \rangle\,, \quad \langle \sin(n\varphi_i) \rangle (5.12)

lead to the same central values either by adding randomly distributed φ angles (〈cos(nφrnd)〉 ∼ 0), or by doubling each term (as would happen if every track were reconstructed twice).
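This argument can be checked with a toy Monte Carlo: adding uniformly distributed 'impurity' angles to events generated with a known reaction plane leaves the mean reconstructed Ψ2 unbiased (all parameters below are illustrative):

```python
import math
import random

random.seed(7)

def sample_phi(v2):
    # Accept-reject sampling of dN/dphi ~ 1 + 2 v2 cos(2 phi), Psi_RP = 0.
    while True:
        phi = random.uniform(0.0, 2.0 * math.pi)
        if random.random() < (1.0 + 2.0 * v2 * math.cos(2.0 * phi)) / (1.0 + 2.0 * v2):
            return phi

def psi2(phis):
    # Second-harmonic event plane from the (unweighted) Q vector.
    qx = sum(math.cos(2.0 * p) for p in phis)
    qy = sum(math.sin(2.0 * p) for p in phis)
    return 0.5 * math.atan2(qy, qx)

n_events, bias = 400, 0.0
for _ in range(n_events):
    phis = [sample_phi(0.08) for _ in range(300)]                    # flow tracks
    phis += [random.uniform(0.0, 2.0 * math.pi) for _ in range(60)]  # 20% impurities
    # with Psi_RP = 0, <sin(2 Psi_2)> should vanish if there is no bias
    bias += math.sin(2.0 * psi2(phis)) / n_events

print(bias)   # consistent with zero within statistics
```

The impurities dilute the resolution (event-by-event scatter of Ψ2 grows) but do not pull its average direction, which is exactly the point made above.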
A possible problem may arise from the full-event plane resolution. The resolution of subevents, calculated from the difference ΨA2 − ΨB2 (see eq.3.10), is safely under control because the average direction of Ψ2 does not change in the presence of impurities. But the calculation of the full-event resolution involves the observed multiplicity (eq.3.8), and a larger M would result in an overestimate of the resolution (and therefore an underestimate of the measured v2).
For a few values of elliptic flow and multiplicity, fig.5.17 shows how the reso lution changes with respect to the fraction of impurity in the sample (values v2 and M are taken from the hydro parametrization, see tab.5.1).
The integrated purity¹¹ of the basic selection (constrainability condition of TPC + ITS tracks) is 90% (see fig.5.8). If the purity is weighted with pT (using the same weight as in the calculation of ~Q), the integrated purity becomes ∼ 93%, leading to a systematic error on the observed resolution smaller than 4% in the worst case (peripheral events).

¹¹Integral of the purity convoluted with the observed pT spectrum.
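The size of the bias from an inflated multiplicity can be illustrated with the standard Bessel-function parametrization of the event plane resolution, R(χ) with χ = v2√(2M) (the usual formula from the event plane literature; the values of v2 and M below are illustrative, not the thesis numbers):

```python
import math

def bessel_i(nu, x, terms=40):
    # modified Bessel function I_nu(x) from its power series
    return sum((x / 2.0) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def resolution(chi):
    # event plane resolution R(chi), with chi = v2 * sqrt(2M)
    a = chi * chi / 4.0
    return (math.sqrt(math.pi) / (2.0 * math.sqrt(2.0))
            * chi * math.exp(-a) * (bessel_i(0, a) + bessel_i(1, a)))

v2, m_true = 0.06, 200                                    # illustrative values
res_true = resolution(v2 * math.sqrt(2 * m_true))
res_wrong = resolution(v2 * math.sqrt(2 * 1.2 * m_true))  # M inflated by 20% impurities
ratio = res_wrong / res_true  # > 1: resolution overestimated, v2 underestimated
```

Feeding a 20% larger multiplicity into the formula inflates the estimated resolution by a few percent, matching the size of the effect shown in fig.5.17.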
[Figure 5.18 plots: (a) v2 (%) versus pT (GeV/c), with v2(pT) Kine, v2(pT) ESD, a linear fit, and v2(pT) input; (b) dN/dpT versus pT (GeV/c), with the Kine and ESD spectra, the efficiency-corrected ESD spectrum, and a Levy fit.]
Figure 5.18. (a) Linear fit of the reconstructed v2 as a function of pT (eq.5.14), with extrapolation to pT = 0. The input value and the KineTree result are also shown. (b) Evaluation of the first pT bin (0 < pT < 100 MeV/c) and the associated error from the efficiency-corrected dN/dpT spectrum, through the factor nlow (see eq.5.6); a Levy fit of the corrected spectrum is also shown (eq.5.8). The full histogram represents the simulated spectrum; the lower set of data is the observed spectrum after the cuts (see sec.5.2) without efficiency correction. These plots are taken from centrality class 2 of the hydro simulations (see tab.5.1).
5.3.3 Differential flow of charged particles

The shape of v2 as a function of pT is an important observable for determining the properties of the Equation of State (see sec.1.3.4). Moreover, the study of elliptic flow with respect to the transverse momentum is needed for the evaluation of the integrated v2.

For pT bins small enough (i.e. of the order of the detector resolution), the reconstruction efficiency can be considered roughly constant within each bin, and therefore the differential shape of v2 versus pT can be measured without taking into account efficiency corrections.
According to the event plane analysis method (see sec.3.2 and [123]), v2(pT) is obtained by dividing the measured v2^obs by the event plane resolution, calculated as the average cos[2(ΔΨ2^sub)] over the centrality class:

v_2(p_T) = \frac{v_2^{obs}(p_T)}{\langle res_2 \rangle_{c.c.}} = \frac{\langle \cos[2(\phi - \Psi_2)] \rangle_{p_T\,bin}}{\langle \cos[2(\Psi_2^{obs} - \Psi^{true})] \rangle_{c.c.}} .   (5.13)
Due to the high purity of the track selection (see fig.5.10(b)), no other systematic corrections are applied to the measured elliptic flow. The (small) effect of impurities is incorporated into the systematic error (see sec.5.3.5).
A linear fit through the origin (fig.5.18(a)) is used to extrapolate the measurement of v2(pT) down to pT < 100 MeV/c:
v2(pT ) = a× pT . (5.14)
The fit interval is pT ∈ (0.1, 2) GeV/c, according to the input of the simulations (see sec.1.3.4).
In a real experiment, where the differential shape of v2(pT) is not linear (see for example fig.1.13 [45, 64]), the extrapolation of v2(pT) to pT = 0 can still be approximated linearly, due to the very small uncovered pT range and to the physical constraint v2(0) = 0. The limited number of particles produced at pT < 100 MeV/c (3.9% of the total¹² in the present parametrization of GeVSim, and 4.2% in Hijing) ensures that an uncertainty of up to 20% on the extrapolated v2 at pT < 100 MeV/c gives an error on the integrated v2 smaller than 1%.
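The extrapolation and its effect on the error budget can be sketched as follows (all bin values hypothetical; the through-origin least-squares slope is a = Σ x·y / Σ x²):

```python
# through-origin least-squares fit: minimizing sum (y - a*x)^2 gives a = sum(xy)/sum(x^2)
def slope_through_origin(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

pt_bins = [0.15, 0.45, 0.75, 1.05, 1.35, 1.65, 1.95]   # hypothetical bin centers (GeV/c)
v2_meas = [0.10 * pt for pt in pt_bins]                # hypothetical, exactly linear data
a = slope_through_origin(pt_bins, v2_meas)             # recovers the slope 0.10 per GeV/c

# error budget: a 20% error on v2 below 100 MeV/c, carrying 3.9% of the yield,
# shifts the integrated v2 by about frac * 0.20 * v2_low / <v2>  (well below 1%)
v2_low, v2_mean, frac_low = a * 0.05, 0.05, 0.039
rel_shift = frac_low * 0.20 * v2_low / v2_mean
```

With these assumed numbers the relative shift is about 0.08%, consistent with the sub-1% claim above.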
[Figure 5.19 panels: v2 (%) versus pT (GeV/c) for LDL c.c.0 to c.c.5, each showing v2(pT) ESD, a linear fit, and v2(pT) input.]
Figure 5.19. Reconstructed v2 as a function of pT for the six centrality classes of the LDL sample, including the most central events with v2^in = 0. The input and a linear fit of the reconstructed data are also shown.
The three figures (fig.5.19, 5.20 and 5.21) show the reconstructed shape of v2 as a function of pT in the interval 0 < pT < 5 GeV/c, for the three sets of GeVSim simulations. The input values and a linear fit of the data are plotted as well. Only one set of simulations has been produced for the centrality class 0 (most central events, with v2 = 0), and it is shown in fig.5.19 together with the LDL sample.
As expected, the measured v2 is in perfect agreement with the input values as long as the event plane resolution is close to 1 (which is mostly the case). For the most peripheral events (c.c.5), due to the larger fluctuations in multiplicity (∼ 80%, see tab.5.1), the difference between the particle-wise and the event-wise average is not negligible. The resolution is calculated from the event-averaged ⟨cos(2ΔΨ2^sub)⟩, while v2^obs is calculated from the particle-averaged ⟨cos(2[Ψ2 − φ])⟩. Higher multiplicity events add more particles with a larger v2^obs, but the event plane resolution (calculated as the average over all the events in the centrality class) gives all the events the same weight, causing an over-correction of the observed v2 and a consequent overestimate of the measured elliptic flow. This effect could be corrected by calculating the resolution as a weighted average over the events, with weights proportional to the selected multiplicity. However, the effect is smaller than the statistical error on the measurements, and therefore it has not been taken into account.

¹²Charged, ‘stable’ hadrons: π±, K±, p and p̄.

[Figure 5.20 panels: v2 (%) versus pT (GeV/c) for hydro c.c.1 to c.c.5, each showing v2(pT) ESD, a linear fit, and v2(pT) input.]

Figure 5.20. Reconstructed v2 as a function of pT for the five centrality classes of the hydro sample. The centrality class 0 plot has not been repeated (see fig.5.19).

[Figure 5.21 panels: v2 (%) versus pT (GeV/c) for hydro2 c.c.1 to c.c.5, each showing v2(pT) ESD, a linear fit, and v2(pT) input.]

Figure 5.21. Reconstructed v2 as a function of pT for the five centrality classes of the hydro2 sample. Centrality class 0 is omitted (see fig.5.19).
5.3.4 Integrated v2

The integrated elliptic flow is calculated as the average of the reconstructed values of v2 versus pT (see sec.5.3.3), weighted by the number of particles reconstructed in each bin of the dN/dpT distribution:

\langle v_2 \rangle = \frac{1}{N_{tot}} \sum_{p_T\,bins} v_2(p_T) \times \frac{dN^{obs}}{dp_T} \times \frac{1}{eff(p_T)} .   (5.15)
As explained in sec.5.2.1, the measured pT spectrum must be first corrected by the reconstruction efficiency of the selected sample (see also eq.3.18).
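The weighted average of eq.5.15 can be sketched as follows (illustrative bins; eff(pT) is taken here as the efficiency to be divided out of the observed yield, consistent with eq.5.16):

```python
def integrated_v2(v2_bins, dn_obs, eff):
    """Yield-weighted average of v2(pT): each bin weighted by dN_obs/eff (cf. eq.5.15)."""
    corrected = [n / e for n, e in zip(dn_obs, eff)]
    return sum(v * n for v, n in zip(v2_bins, corrected)) / sum(corrected)

# two hypothetical bins: correcting the second bin's 50% efficiency doubles its weight
v2_int = integrated_v2([0.02, 0.04], [100.0, 100.0], [1.0, 0.5])
```

Without the efficiency correction the two bins would be weighted equally; with it, the second bin counts twice as much and pulls the average up accordingly.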
[Figure 5.22 panels: ⟨v2⟩ (%) versus centrality class (1–5) for LDL, hydro, and hydro2, each showing ⟨v2⟩ ESD, σsys⟨v2⟩, and ⟨v2⟩ Kine.]
Figure 5.22. Integrated v2 with respect to the centrality class for the three sets of GeVSim simulations. The plot shows the reconstructed ⟨v2⟩ from the AliESDs and the results of the event plane analysis on the KineTrees; statistical and systematic errors are also shown (see sec.5.3.5).
The number of particles with pT < 100 MeV/c is extrapolated as a fraction of the total integral of dN/dpT (see sec.5.2):

N_{p_T<100\,MeV/c} = n_{low} \times \frac{1}{eff} N_{obs} ,   (5.16)

where N_obs/eff is the integral of the efficiency-corrected spectrum observed at pT > 100 MeV/c, and n_low = 0.0406 (see eq.5.6). The result is shown in fig.5.18(b), together with the input spectrum from the KineTree for comparison.
A linear fit is used to extrapolate the v2 measurement down to pT = 0 (see fig.5.18(a)). The mean value of v2 in the first bin is calculated from the fit function, evaluated at the mean value of the dN/dpT distribution between 0 and 100 MeV/c (calculated from the fit of the efficiency corrected pT spectrum, see eq.5.8).
Finally, figure 5.22 shows the integrated v2 with respect to the centrality class for the three sets of GeVSim simulations. As we can see, the simulated values of 〈v2〉 are well reproduced within the statistical (and systematic) error.
5.3.5 Systematic and Statistical Error on the measured v2

The only source of systematic error on the differential shape of v2 as a function of pT is the presence of impurities in the reconstructed spectra (‘impurities’ in each pT bin include both secondary particles and primary particles reconstructed at a different pT).

As shown in fig.5.23(a), at low transverse momentum (pT ≲ 1 GeV/c) secondary particles have a larger v2 than primary particles (in a decay, the mother particle produces two daughters with roughly the same flow as the mother but a lower momentum); therefore the presence of contamination increases the measured value of v2 at lower pT. The opposite effect (contamination with a lower v2) can also happen due to bin-shift; however, this effect is completely negligible with respect to the statistical fluctuations (see fig.5.23(a), at pT > 2 GeV/c).
The systematic error on v2(pT ) is calculated from the difference ∆v2 between the measured v2 of correctly reconstructed primary particles and the measured v2 of the contamination found in the final ESD, weighted by the purity of the selection in each bin. The relative difference ∆v2/v2 is large only in the first few bins (pT < 500 MeV/c).
The overestimate of the measured v2 at low pT due to impurities can be expressed as:

v_2^{meas}(p_T) = pur(p_T) \times v_2'(p_T) + (1 - pur(p_T)) \times v_2''(p_T) .   (5.17)

The relative systematic error on the measured v2 is therefore obtained as:

\frac{v_2'(p_T) - v_2^{meas}(p_T)}{v_2'(p_T)} = (1 - pur) \times \frac{\Delta v_2}{v_2'}(p_T) .   (5.18)
Weighting this contribution by the purity of the selected track sample, only the leftmost bin (100 < pT < 200 MeV/c), where the magnitude of v2′ is small and the contamination is large, shows a large systematic error σv2/v2 ≃ 6.5% (see fig.5.23(b)). Otherwise the error is of the order of 1−2% over almost all the pT range of interest, becoming negligible for pT > 800 MeV/c.
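Numerically, eq.5.17 and 5.18 amount to the following (hypothetical first-bin numbers, chosen only to reproduce an error of the quoted ∼ 6.5% size):

```python
def rel_sys_error(purity, v2_prim, v2_cont):
    """Relative shift of the measured v2 due to contamination (cf. eq.5.17-5.18):
    v2_meas = pur*v2' + (1-pur)*v2'', so (v2' - v2_meas)/v2' = (1-pur)*(v2'-v2'')/v2'."""
    v2_meas = purity * v2_prim + (1.0 - purity) * v2_cont
    return (v2_prim - v2_meas) / v2_prim

# hypothetical low-pT bin: 80% purity, contamination flowing 32.5% more than primaries
err = rel_sys_error(0.80, 0.010, 0.01325)
```

The result is negative (about −0.065), i.e. a 6.5% overestimate of v2, since here the contamination flows more than the primaries.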
As a consequence, the integrated v2 is hardly affected by this level of contamination; however, it is possible to calculate an upper limit for the systematic error on ⟨v2⟩ due to the presence of impurities:

\sigma_{v_2}^{tot} = \frac{\sqrt{\sum N_{p_T} \sigma_{v_2}^2(p_T)}}{N_{tot}} \lesssim 2.4\% ,   (5.19)

where the sum has been limited to the interval 0.1 < pT < 1 GeV/c.

[Figure 5.23 plots: (a) v2/v2^sat versus pT (GeV/c) for primaries and secondaries, with Δv2 and Δv2/v2; (b) σv2 versus pT (GeV/c), showing σv2/v2, Δv2/v2, and 1 − purity.]

Figure 5.23. (a) Reconstructed v2/v2^sat as a function of pT for all primary particles and for reconstructed secondaries in the ESD; the difference between the two and the relative contribution to the measured v2 are shown as well. Since the effect is similar for any input value of v2, this plot is produced from the whole set of GeVSim simulations, scaling each centrality class by its saturation v2. (b) Systematic error on the measured v2, calculated as (1 − pur) × Δv2/v2 (the calculated impurity is also shown).
The systematic error on the integrated v2 is dominated by the uncertainty on the efficiency (as a function of pT ), calculated in sec.5.2.2.
The relative systematic error σ⟨v2⟩/⟨v2⟩ on the integrated flow is calculated as the difference σ⟨v2⟩ = |⟨v2⟩+ − ⟨v2⟩−|, where:

\langle v_2 \rangle_{\pm} = \frac{1}{N_{tot}} \sum_{p_T\,bins} v_2(p_T) \times \frac{dN^{obs}}{dp_T} \times \frac{1}{eff(p_T) \pm \sigma_{eff}} ,   (5.20)

divided by the (measured) central value of ⟨v2⟩. This also includes the systematic uncertainty on the extrapolation of dN/dpT between 0 and 100 MeV/c, where the two extremes are given by N1 ± 7% (see sec.5.2.2).

The result of this procedure is:

\frac{\sigma_{\langle v_2 \rangle}}{\langle v_2 \rangle} = \frac{1}{\langle v_2 \rangle} \left| \langle v_2 \rangle_+ - \langle v_2 \rangle_- \right| \simeq 0.126 ,   (5.21)

which implies a systematic uncertainty on the central ⟨v2⟩ value of ±6.3%.
Figure 5.22 shows that, with the number of events available, the systematic error σ⟨v2⟩ is large but comparable to the statistical error, calculated as v2^RMS/√Nobs (where v2^RMS = √⟨(⟨v2⟩ − v2)²⟩).
However, the statistical error associated with the present measurements is probably underestimated, because the simulations in each centrality class have been produced with a fixed input value of ⟨v2⟩. Therefore the width of the v2 distribution within each centrality class is smaller than in a real experiment.
An upper limit on the statistical error on the integrated flow, with respect to the number of events available, is given by:

\sigma_{stat} < \frac{max(v_2^{RMS})}{\sqrt{N_{evts}}} ,   (5.22)

where v_2^{RMS} = \sqrt{\langle (\langle v_2 \rangle - v_2)^2 \rangle} is the spread of v2 within a single event.

The maximum spread in v2 is 200% (from particles maximally correlated with the event plane to particles maximally anti-correlated), and the smallest multiplicity considered in the present analysis is 40 particles per unit rapidity (see tab.5.1 and 5.3), which gives a minimum of about 50 correctly reconstructed primary particles per event in the TPC volume¹³. Therefore the upper limit is max(v2^RMS) = 4%, giving σstat < 0.04/√Nevts.
The upper limit of the relative statistical error on v2 is given by (for ⟨v2⟩ ≥ 1%):

max(\sigma_{stat}/v_2) = \frac{4}{v_2(\%) \sqrt{N_{evts}}} \leq \frac{4}{\sqrt{N_{evts}}} ,   (5.23)

which becomes less than 4% as soon as 10,000 events are available, and σstat/v2 < 0.4% for Nevts = 1,000,000 (one day of ALICE running). We can compare eq.5.23 with the values listed in tab.5.5 (see also the discussion in sec.5.5).
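Eq.5.23 translates into a simple scaling with the number of events; for the worst case ⟨v2⟩ = 1%:

```python
import math

def max_rel_stat_error(n_events, v2_percent=1.0):
    # eq.5.23: max(sigma_stat / v2) = 4 / (v2(%) * sqrt(N_evts))
    return 4.0 / (v2_percent * math.sqrt(n_events))

# 10,000 events -> 4%; 1,000,000 events (about one day of data taking) -> 0.4%
errs = [max_rel_stat_error(10_000), max_rel_stat_error(1_000_000)]
```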
5.3.6 Conclusions

Fig.5.24 shows the integrated v2 with respect to the charged multiplicity at mid-pseudorapidity (corrected by the total reconstruction efficiency) for the three sets of GeVSim simulations.

From this plot we can see how well the event plane analysis at ALICE can distinguish between different models describing the underlying physics of elliptic flow, in relation to the error associated with the measurement (both statistical and systematic errors are shown).

However, in these simulations nonflow effects are absent or very small (there are no jet correlations, only decays). In a real experiment they are expected to give a large contribution in the low multiplicity region (see sec.4.1 and 5.4).
¹³This number is approximately given by N′TPC ∼ 1.8 × dN/dη × eff, with an efficiency of about 64% (see sec.5.2.1).
[Figure 5.24: ⟨v2⟩ versus dNch/dη for LDL, hydro, and hydro2, showing the measured and input ⟨v2⟩ and σsys⟨v2⟩meas.]
Figure 5.24. Reconstructed value of 〈v2〉 as a function of the charged multiplicity at mid pseudorapidity for the three sets of GeVSim simulations, including statistical and systematic error. The input values of v2 and dN/dη are also shown (see tab.5.1).
5.4 Realistic scenario (Hijing + AfterBurner)

In section 4.1, Hijing simulated events have been studied to quantify the nonflow correlations originating from jets and particle decays, and in section 4.3 we saw the combined effect of genuine elliptic flow and nonflow correlations.
This section will illustrate an analysis done on a realistic set of data, generated with Hijing plus the flow AfterBurner, and fully reconstructed in AliRoot.
5.4.1 Simulations details

Events are produced in twelve centrality classes, each one with a fixed impact parameter. The particle multiplicity, its width, and the magnitude of the integrated v2 are listed in tab.5.3.
The main vertex position is fixed at (x, y, z) = (0, 0, 0). The magnetic field, measured at the center of the solenoid, is ~B = 0.4 T.
Particle composition
The particle composition generated by Hijing is the result of its internal implementation of the hadronization processes [104].

Tab.5.4 shows the relative particle abundances, averaged over all the produced Hijing events. The relative ratios of the three types of charged primary hadrons considered in the analysis are 86.7% π±, 8.7% K±, 4.6% p and p̄.

A detailed study of the centrality dependence of the particle ratios has not been carried out; however, due to the implementation of Hijing as a superposition of many pp collisions, the ratios are approximately constant within the statistical fluctuations of each event.
Table 5.3. Details of the Hijing + AfterBurner simulations (generated separately in 12 centrality classes).
c.c.   b (fm)   dNch/dη ± RMS   ⟨v2^hydro⟩ %   Nevts
0      7.0      2528 ± 308      0.0            1k
1      7.5      2184 ± 303      1.32           2.2k
2      8.0      1860 ± 301      3.26           2.4k
3      8.6      1524 ± 295      4.75           1.4k
4      9.15     1264 ± 249      5.95           1.1k
5      9.7      992 ± 192       7.39           1k
6      10.6     652 ± 173       8.72           1k
7      11.5     405 ± 139       9.42           1.3k
8      12.2     253 ± 103       9.44           2.5k
9      13.1     121 ± 62        8.67           5.7k
10     13.6     84 ± 54         6.96           12k
11     14.1     43 ± 33         1.0            8k
pT and η spectra
The dN/dpT and dN/dη distributions generated by Hijing are shown in fig.5.25 for the three species of charged ‘stable’ hadrons considered in the analysis (the spectra in fig.5.25 are obtained as the sum over the entire sample). The pseudorapidity limits are −1.3 ≲ η ≲ 1.3.
[Figure 5.25 plots: (a) dNi/dpT versus pT (GeV/c) and (b) dNi/dη versus η, for π±, K±, and p, p̄.]
Figure 5.25. Hijing generated spectra of the three charged ‘stable’ hadrons (π±, K±, p and p̄): dN/dpT (a) and dN/dη (b).
As we can see, the dN/dη distribution (fig.5.25(b)) is almost flat, while the dN/dpT distribution (fig.5.25(a)) has a shape which can be described by eq.5.10 (see fig.5.13 for comparison).

Table 5.4. Total and relative particle abundances produced by the Hijing simulations (not all particle species are listed, therefore %/tot does not add up to 100%).

p.type (%/tot)        P.Id.           %/tot    %/‘stable’ h±
pions (38.5%)         π+              10.91    86.7
                      π−              10.92
                      π0              16.6     0
kaons (10.8%)         K+              1.01     8.7
                      K−              1.0
                      K0S             1.67     0
                      K0L             0.98
nucleons (1.7%)       p               0.69     4.6
                      p̄               0.68
                      n               0.68     0
                      n̄               0.66
hyperons (1.6%)       Λ0, Λ̄0          0.73     0
                      Σ, Σ̄            0.7      0
                      Ξ, Ξ̄            0.16     0
                      Ω, Ω̄            0.001    0
heavy mesons          ρ, η, ω, φ0     13.2     0
photons               γ               29       0
leptons               e±, µ±, τ±      0.35     0
Elliptic flow v2
Unlike the simulations described in sec.4.1 and 4.3, the present set of fully reconstructed events has been produced in 12 separate centrality classes, each one with a fixed value of v2 (determined by the geometry of the collision at a fixed impact parameter, see sec.1.3) but a non-constant multiplicity, due to the fluctuations involved in the production processes (implemented in Hijing).

Elliptic flow versus centrality has been parametrized according to the hydrodynamic model with the lowest value of cs (see sec.1.3). Tab.5.3 summarizes the simulated values of ⟨v2⟩ for the 12 centrality classes of the generated events.
The differential shape of v2(pT) increases linearly up to its saturation value at pT^sat = 2 GeV/c (the same as in the other simulations). Integrating over the Hijing generated spectra of π±, K±, p and p̄, the saturation values of v2 are given by v2^sat = k_{i→s} ⟨v2⟩, with k_{i→s} = 4.49.
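The conversion factor follows from averaging the linear-then-saturated shape of v2(pT) over the particle spectrum; the toy estimate below uses an assumed thermal-like exponential spectrum (not the Hijing one), so its k differs from the quoted 4.49:

```python
import math, random

random.seed(1)

def v2_shape(pt, pt_sat=2.0):
    # linear rise up to saturation at pT_sat, in units of v2_sat
    return min(pt / pt_sat, 1.0)

# toy exponential pT spectrum with <pT> = 0.35 GeV/c (illustrative only)
sample = [random.expovariate(1.0 / 0.35) for _ in range(200_000)]
k = 1.0 / (sum(v2_shape(pt) for pt in sample) / len(sample))
# v2_sat = k * <v2>; the exact value of k depends on the particle spectrum
```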
5.4.2 Event plane and resolution
Due to the presence of nonflow effects, the ‘observed’ event plane resolution calculated from ΔΨ2^sub is higher than the ‘true’ one (i.e. ⟨cos[2(Ψ2^obs − Ψtrue)]⟩). In fig.5.26 the ‘true’ event plane resolution is compared to the ‘observed’ ones, calculated using two different definitions of subevents (see also sec.4.3). The present results have been obtained from the KineTree of all simulated primary hadrons (π±, K±, p and p̄); fig.4.8(b) in sec.4.3 shows the same plot versus dN/dη.
[Figure 5.26: ⟨cos[2(Ψ2 − Ψtrue)]⟩ versus centrality class (0–11), comparing the expected resolution (formula), the true resolution, and the observed resolution from η and random subevents.]

Figure 5.26. The generated distribution of cos(2[Ψtrue − Ψ2^obs]) is compared to the result of equation 3.8 (for the simulated values of M = 1.8 × dN/dη and v2 respectively) and to the ‘observed’ event plane resolution, calculated from η and random subevents. The histogram shows the KineTree results versus the centrality class (see tab.5.3 for the simulation details).
The observed event plane resolution depends on the choice of the subevents, being closer to the ‘true’ one for η subevents. Therefore the full-event resolution is extrapolated with the iterative procedure described in sec.3.2.1 using η subevents.

The same set of cuts described in sec.5.3.2 has been applied for the reconstruction of the event plane from the AliESDtracks; the choice of constrainable TPC + ITS tracks with no additional cuts gives the best resolution (i.e. the closest to the ‘optimal’ one, calculated from all primary hadrons in the KineTree).
Fig.5.27 shows the observed resolution calculated from the reconstructed tracks using different sets of cuts, with and without pT weights in the calculation of ~Q2. As expected, the use of pT weights gives a higher resolution (closer to its saturation value), which better reproduces the true one.
The presence of nonflow effects is clearly noticeable when their magnitude is comparable with that of genuine collective flow, i.e. in the most central and most peripheral events (first and last bins respectively).
[Figure 5.27 panels: ⟨cos[2(Ψ2 − Ψtrue)]⟩ versus centrality class (0–11) for (a) unit weights and (b) pT weights in the calculation of ~Q2, each comparing the formula resolution with ESD results for no cuts, constrainable tracks, and tDCA < 0.5 mm.]
Figure 5.27. Observed event plane resolution versus centrality class, calculated from ΔΨ2^{η sub}, using different cuts on the reconstructed AliESDs. The two plots show the results using unit weights (a) and pT weights (b) in the calculation of ~Q2. The ‘optimal’ values (i.e. the observed event plane resolution calculated from primary hadrons in the KineTree) are shown as well (see tab.5.3 for the simulation details).
5.4.3 Differential and integrated flow
The reconstruction of the differential shape of v2 is done in the same way as described in sec.5.3.3. Figure 5.28 shows the reconstructed shape of v2 as a function of pT in the interval 0 < pT < 5 GeV/c, for the twelve centrality classes of the Hijing + AfterBurner simulations. The input values are also shown.

The measured v2 is in good agreement with the input values in midcentral collisions. The agreement is less accurate in the extreme cases (most central and most peripheral events), where the magnitude of nonflow effects becomes comparable to that of the genuine elliptic flow.

The integrated v2 is calculated as in section 5.3.4. Efficiency corrections are applied to the observed dN/dpT spectrum (see sec.5.2.1), and the first bin of the dN/dpT histogram is evaluated as a fraction of the total integral of the corrected spectrum observed at pT > 100 MeV/c (see eq.5.9): N1 = nlow × Na, with nlow = 0.0436. A linear fit of v2(pT) is used to extrapolate the measurement of v2 down to pT = 0.
Fig.5.29 shows the integrated v2 as a function of the charged multiplicity (corrected by the total reconstruction efficiency) for the twelve centrality classes of the Hijing + AfterBurner simulations. The statistical error on the measurements is v2^RMS/√Nobs; the relative systematic error due to the calculated efficiency, applied cuts and contamination from secondaries is assumed to have the same magnitude as the one calculated for the GeVSim sample, therefore σ⟨v2⟩/⟨v2⟩ ≃ 6.3% (see sec.5.3.5).
As we can see, the simulated centrality dependence of elliptic flow is well reproduced within the statistical error over a wide range of centrality classes (midcentral events). However, nonflow effects cause the reconstructed v2 to be larger than the input one, especially in very peripheral collisions. This is shown by the difference between the input and the reconstructed ⟨v2⟩ (see fig.5.29).

[Figure 5.28 panels: v2 (%) versus pT (GeV/c) for hijing+AB c.c.0 to c.c.11, each showing v2(pT) ESD, a linear fit, and v2(pT) input.]

Figure 5.28. Reconstructed v2 as a function of pT for the 12 centrality classes of the Hijing + AfterBurner simulations. The input and a linear fit of the reconstructed data are also shown.

[Figure 5.29: ⟨v2⟩ versus dNch/dη, showing the measured Hijing ⟨v2⟩, the hydro input ⟨v2⟩, σsys⟨v2⟩meas, and the nonflow contribution Δv2.]

Figure 5.29. Reconstructed ⟨v2⟩ versus dN/dη for the Hijing + AfterBurner simulations, including statistical and systematic errors (see sec.5.3.5). The input values of v2 are shown as well; from the difference between the input and the reconstructed ⟨v2⟩ the observed magnitude of nonflow effects is drawn.
5.5 Conclusions

Using the results obtained up to here, it is possible to give an overview of the known sources of experimental uncertainty affecting the elliptic flow measurement at ALICE with the event plane method.

To correctly estimate the statistical uncertainty, it must be taken into account that the simulations presented in this chapter were produced in separate centrality classes, each with a fixed value of v2. Therefore the statistical error on v2 (calculated from v2^RMS, see sec.5.3.5) is underestimated.

For a more reliable prediction, the statistical errors are extrapolated from a set of simulations produced with a continuous impact parameter distribution (7 < b < 14.5 fm), where v2 is assigned to each event according to its impact parameter, but the centrality class selection is based on the final particle multiplicity. The KineTrees of the Hijing + AfterBurner simulations (with no detector reconstruction) presented in sec.4.3 have been used for this purpose, where the centrality dependence of ⟨v2⟩ follows the hydro parametrization (see sec.1.3.2).
Events are divided into five centrality classes, each defined as 20% of the total inelastic cross section (i.e. 20% of the total integral of the Hijing multiplicity distribution, with rescaled impact parameter 7 < b < 14.5 fm). The statistical errors on v2 obtained in this way have been scaled to take into account the efficiency of the detector and the applied cuts (only 64.3% of the primary particles are actually reconstructed, see sec.5.2.1).
Table 5.5. Summary of the errors associated with the elliptic flow measurement (from a sample of 50,000 minimum-bias Hijing + AfterBurner events, with elliptic flow from the hydro extrapolation). Centrality classes are defined as 20% of the total inelastic cross section.

% c.s.    dNch/dη      ⟨v2^true⟩ %   σstat   σsys   σnonflow
0−20      > 1450       3.67          0.04    0.20   0.12
20−40     670−1450     7.87          0.03    0.43   0.10
40−60     260−670      9.74          0.04    0.53   0.01
60−80     100−260      8.09          0.10    0.44   0.49
80−100    < 100        4.50          0.38    0.25   2.88
0−100     0 ∼ 2500     6.76          0.05    0.37   0.25
Table 5.5 summarizes the three sources of uncertainty that have been considered in the present analysis: statistical error, systematic error, and nonflow contributions. The statistical errors (σstat) listed in tab.5.5 are calculated as:

\sigma_{stat} = \frac{v_2^{RMS}}{\sqrt{eff \times N_{evts}^{c.c.}}} ,   (5.24)

where N_evts^{c.c.} is the number of events in each centrality class (i.e. N_evts^{c.c.} ≃ (1/5) × 50,000). Assuming that 10 minimum bias events per second are reconstructed in the ALICE central barrel detectors, this corresponds to less than two hours of heavy ion running at the LHC.
We immediately see that, given a few days of heavy ion running, the statistical error becomes negligible with respect to the systematic one. Nonflow effects represent a large source of uncertainty only at low multiplicity (most peripheral events), while they can be neglected for midcentral events.
Chapter 6
Conclusions
The last part of the previous chapter gave an overview of the sources of experimental uncertainties on the measurement of elliptic flow, as developed in this thesis.
Since v2 is calculated as an averaged quantity, its statistical error scales with the square root of the number of events available (σ⟨v2⟩ = σ/√N); therefore, in a few days of heavy ion running, the statistical error will become negligible with respect to the systematic uncertainty and to the magnitude of nonflow effects (see sec.5.5).
The systematic error is large mainly due to the way efficiency corrections are calculated, and only a small contribution is due to the presence of impurities (which cause an overestimate of v2 at low pT ).
• A larger sample of simulated events would allow a detailed study of the efficiency with respect to the particle multiplicity, eliminating in this way a large contribution to the systematic error, which is due to the multiplicity dependence of the efficiency (see sec.5.1.3).

• A detailed study of the particle ratios (and their pT dependence) in PbPb collisions at LHC energy would remove the uncertainty due to the unknown particle admixture, which also contributes to the systematic error on the efficiency (see sec.5.1.2).

• However, only a better characterization of the ITS resolution, and the implementation of a fit-points-dependent cut, could reduce the systematic error due to the applied cut on the transverse DCA (see sec.5.2).

The error on the measured v2 at low pT could be reduced by increasing the purity of the selection (but this will also reduce the statistics, especially at low pT, see sec.5.3.5), or by extending the linear fit of v2(pT) to extrapolate v2 up to 200−300 MeV/c. However, since the actual shape of v2(pT) is generally not linear, a better fit function should be modeled on available experimental data.
The contributions due to nonflow correlations can be large (assuming they are well described by Hijing), but they mainly affect peripheral events (dN/dη < 200−300). At higher multiplicity, and especially in midcentral events, where the genuine elliptic flow is expected to be large, nonflow contributions become less important and could be neglected for a preliminary flow analysis (see sec.4.3 and 5.4.3).

However, nonflow correlations cannot be completely eliminated by the event plane formalism alone, and therefore other analysis methods should be used. For this reason, both the Cumulants and the Lee-Yang zeroes methods are currently under implementation in the AliRoot environment.
Appendix A
Class Description
The following is a list of the C++ classes implemented in the AliFlow package, with a brief description of their purpose. The HTML documentation of the AliFlow package can be automatically generated from the source files (with ROOT THtml) or found on the web [131].
AliFlowEvent
The AliFlowEvent class contains global event variables, such as event and run number, trigger signal, and other event observables such as the signals from the ZDC or the FMD. An object array (ROOT TClonesArray class) stores the reconstructed track candidates (AliFlowTrack class, see below), and another array is filled with the reconstructed neutral secondary vertices (AliFlowV0 class).
The AliFlowEvent class inherits from the basic ROOT TObject, so that it can be chained into a TChain or written to disk in a ROOT file. Due to the reduced amount of information that is stored, the size of an AliFlowEvent object is about 1/10 of the original AliESD.
The class implements methods to split the event into random or η subevents and to calculate event-by-event quantities (such as ~Qn and Ψn of the full event and the subevents) for a given selection of track candidates, with or without pT or η weights. The class also contains the φ weight structure as a static pointer, which has to be filled at the beginning of the analysis loop with the calculated φ weights (see sec. 3.2.3). Bayesian ‘a priori’ probabilities for particle identification can also be assigned in this way (see sec. 2.3.2).
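For reference, the event-plane calculation that these methods implement can be sketched in plain C++ (a minimal stand-in with hypothetical names; the real methods operate on AliFlowTrack arrays and ROOT types):

```cpp
#include <cmath>
#include <vector>

const double kPi = 3.141592653589793;

// Flow vector for harmonic n from track azimuths phi_i and weights w_i:
//   Q_x = sum_i w_i cos(n phi_i),  Q_y = sum_i w_i sin(n phi_i)
struct QVector { double x, y; };

QVector FlowVector(const std::vector<double>& phi,
                   const std::vector<double>& w, int n) {
  QVector q{0.0, 0.0};
  for (std::size_t i = 0; i < phi.size(); ++i) {
    q.x += w[i] * std::cos(n * phi[i]);
    q.y += w[i] * std::sin(n * phi[i]);
  }
  return q;
}

// Event-plane angle Psi_n = atan2(Q_y, Q_x) / n, folded into [0, 2pi/n)
double EventPlane(const QVector& q, int n) {
  double psi = std::atan2(q.y, q.x) / n;
  if (psi < 0.0) psi += 2.0 * kPi / n;
  return psi;
}
```

Note that two back-to-back tracks at φ and φ + π contribute identically to the second-harmonic sum, which is why Ψ2 is only defined modulo π.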
The AliFlowEvent data structure enables the event plane analysis by default, and the same data structure can be used to implement the Cumulants and the Lee-Yang Zeroes analyses (see sec. 3.4). Some of the methods to calculate the generating function for the cumulants analysis have been ported from the StFlowEvent code to the AliFlowEvent; however, they have not been tested so far.
AliFlowTrack
The AliFlowTrack class summarizes the information of the AliESDtracks stored in the ESD. Data members of this class are the kinematic variables pT , η and φ, for both the constrained and the unconstrained fit of the track (see sec.2.3), together with their χ2 and distance of closest approach to the main vertex.
Track parameters are limited to the four central detectors (ITS, TPC, TRD and TOF, see sec. 2.1); for each of them, the number of fit points, the number of findable clusters and the dE/dx signal (time signature for the TOF) are stored. The Bayesian probability for each particle hypothesis is also stored in a 4 × 5 array (detectors × ALICE p.Id., see sec. 2.3.2).
The class also contains a pointer to an array of boolean flags, filled during the loop for the determination of the event plane, that makes it possible to determine whether a track was included in the calculation of Ψn for a given selection (its contribution can then be subtracted from ~Qn to avoid autocorrelation effects, see sec. 3.2.2). A similar structure is repeated for the subevent selection.
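The autocorrelation removal mentioned above amounts to the following (a simplified sketch with hypothetical names, assuming the flags have already been filled during the event-plane loop):

```cpp
#include <cmath>

// A track that entered the event-plane sum carries a flag; its own
// term is subtracted from the flow vector Q_n before the track is
// correlated with Psi_n, so that it cannot correlate with itself.
struct Track { double phi; double weight; bool usedForPlane; };

void SubtractTrack(double& qx, double& qy, const Track& t, int n) {
  if (!t.usedForPlane) return;  // track never entered Q_n: nothing to do
  qx -= t.weight * std::cos(n * t.phi);
  qy -= t.weight * std::sin(n * t.phi);
}
```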
AliFlowV0
Neutral decay vertices can be stored as AliFlowV0 objects in a separate TClonesArray in the AliFlowEvent. The AliFlowV0 class contains the kinematic variables (pT, η and φ), the V0 position with respect to the primary vertex (decay length), the invariant mass, the most probable particle identification hypothesis and some reconstruction parameters, such as the DCA of the two tracks at the crossing point and the combined χ2. The AliFlowV0 also stores two pointers to the daughter tracks in the AliFlowTracks array.
AliFlowSelection
The AliFlowSelection class is used to select events, tracks for the determination of the event plane, and tracks and V0s for the correlation analysis.
Data members of this class are integer or floating point numbers, defining the interval of acceptance for selecting:
• events (typically to select a particular centrality class, e.g. multiplicity limits at midrapidity);
• tracks for the determination of the event plane (e.g. constrainable tracks with TPC + ITS signal); several sets of cuts can be tested in a single run (see sec. 3.3.1);
• tracks (and V0s) for the correlation analysis (e.g. track candidates with a tDCA < 100 µm); these are the particles that enter the calculation of v2 (eq. 3.6).
An AliFlowSelection object must be instantiated at the beginning of the analysis (prior to the flattening φ weights loop) and filled with the desired set(s) of cuts. Only cuts that are explicitly set in the AliFlowSelection object are applied to the analysis.
Once the cuts are defined, the method AliFlowSelection::Select(TObject*) returns true or false depending on whether the event/track/V0 is selected. If several selections are used for the determination of the event plane, the harmonic and selection number must also be specified in the method. The selection of V0 candidates for the correlation analysis also requires an invariant mass cut. The flow coefficients are then calculated within the specified mass range and in two equivalent sidebands, to estimate the flow of the background.
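The ‘only cuts explicitly set are applied’ logic can be sketched as follows (hypothetical, much-reduced names; the real class also handles event and V0 cuts and multiple selections):

```cpp
#include <cmath>

struct TrackData { double pt, eta, dcaT; };  // dcaT: transverse DCA (cm)

// Each cut carries an "is set" flag; unset cuts are simply skipped,
// mirroring the behaviour of AliFlowSelection described above.
struct Selection {
  bool cutPt  = false; double ptMin  = 0.0, ptMax  = 0.0;
  bool cutEta = false; double etaMin = 0.0, etaMax = 0.0;
  bool cutDca = false; double dcaMax = 0.0;

  bool Select(const TrackData& t) const {
    if (cutPt  && (t.pt  < ptMin  || t.pt  > ptMax))  return false;
    if (cutEta && (t.eta < etaMin || t.eta > etaMax)) return false;
    if (cutDca && std::fabs(t.dcaT) > dcaMax)         return false;
    return true;
  }
};
```

With only the DCA cut set to 100 µm (0.01 cm), a track with tDCA = 50 µm passes and one with tDCA = 200 µm is rejected, while pT and η remain unconstrained.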
In the present thesis, the event selection is based only on the observed multiplicity (see sec. 4.3). The cuts for the determination of the event plane are optimized to achieve the best resolution (see sec. 5.3.2), and the cuts applied in the correlation analysis of charged particles are optimized for the selection of primaries (see sec. 5.2).
AliFlowAnalyser
The AliFlowAnalyser class performs the event plane analysis over the AliFlowEvents (see fig. 3.4), and produces a default set of histograms summarizing the results.
The AliFlowEvent loop has to be implemented externally, providing more flexibility (such as the possibility to perform on-the-fly analysis while looping over the AliESDs). The class is instantiated at the beginning of the event loop, and an AliFlowSelection object must be provided to apply the required cuts (the flattening φ weight histograms can also be loaded at this step).
The whole execution is driven by three methods:
Init is called just once at the beginning to initialize the histograms and set the analysis parameters (e.g. use of pT weights, choice of the subevents);
Make is called for each AliFlowEvent in the loop; it performs the event selection, the determination of the event plane(s) of the full event and the subevents, and fills the profile histograms of v2^obs and cos(∆Ψ2^sub);
Finish concludes the analysis by calculating the global resolution with the subevents method (the average is taken over all the selected events), and by correcting the observed flow coefficients. If the efficiency histogram versus pT is provided, it also calculates the integrated flow.
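The Init/Make/Finish flow, together with the subevent resolution correction, can be sketched as follows (hypothetical names; for simplicity the low-resolution approximation R ≈ √(2⟨cos 2(Ψa − Ψb)⟩) is used here, whereas the full subevent method inverts the Bessel-function relation numerically):

```cpp
#include <cmath>

// Skeleton of the three-method driver described above (simplified).
class Analyser {
 public:
  void Init() { sumCos_ = sumVobs_ = 0.0; nEv_ = 0; }  // book histograms etc.
  void Make(double psiA, double psiB, double v2obs) {  // once per event
    sumCos_  += std::cos(2.0 * (psiA - psiB));         // subevent correlation
    sumVobs_ += v2obs;
    ++nEv_;
  }
  double Finish() const {                              // corrected v2
    double resolution = std::sqrt(2.0 * sumCos_ / nEv_);
    return (sumVobs_ / nEv_) / resolution;
  }
 private:
  double sumCos_ = 0.0, sumVobs_ = 0.0;
  long nEv_ = 0;
};
```

Since the resolution is always below one, the correction can only increase the observed coefficient.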
All the analysis histograms are saved in a ROOT file; therefore, both the resolution and the efficiency corrections can also be applied at a later stage.
AliFlowConstants
The namespace AliFlowConstants stores static data members that do not need to change during the analysis, e.g. the number of selections in use for the event plane determination, the number of bins of the various histograms, and the definitions of the centrality classes. Any change to these numbers requires the AliFlow package to be recompiled.
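A namespace of this kind looks roughly as follows (illustrative names and values only, not the ones actually hard-coded in the package):

```cpp
// Compile-time constants in the style of AliFlowConstants; changing
// any of them requires recompiling the code that uses the namespace.
namespace FlowConstants {
  constexpr int    kSelections = 2;     // event-plane selections in use
  constexpr int    kHarmonics  = 2;     // v1 and v2
  constexpr int    kPtBins     = 100;   // histogram binning
  constexpr double kPtMax      = 10.0;  // GeV/c, upper histogram edge
  // multiplicity edges defining the centrality classes
  constexpr double kCentralityEdges[] = {0.0, 100.0, 200.0, 400.0, 800.0};
}
```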
AliFlowMakers
The AliFlowMaker class is the interface between the ALICE event summary data and the AliFlowEvent, i.e. a parser that reads the useful values from AliESD objects and organizes them into the AliFlowEvent structure.
The AliFlowKineMaker class is the interface between the kinematic tree produced by the event generator and the AliFlowEvent. The AliFlowKineMaker is not a fast event simulator (no smearing is applied to the original particles’ kinematics, and no detector information is produced); it simply creates clean AliFlowEvent objects that can enter the same analysis chain as the reconstructed events. Most of the data members of the AliFlowEvent are left empty or filled with dummy values (100% p.Id. probability, fit χ2 = 1, ...).
This approach has been very useful to test the functionalities of the event plane analysis on an ideal input, without going through the full reconstruction chain of AliRoot (which can be very time consuming, see sec. 2.2.2): an event generator is used to generate events with the chosen flow and particle multiplicity (transport is switched off in AliRoot), and the produced KineTrees of particles with exact momentum, production vertex and particle Id. are converted into AliFlowEvents and submitted to the analysis chain (this is the approach used in chapter 4).
Some very wide quality cuts are applied at this step:
• only AliESDtracks with TPC signal are taken from the AliESD; • only primary particles, or secondaries associated to an AliESDtrack (if ‘la
bels’ are available), are imported from the KineTree. Both the ‘flow makers’ can be used on the fly, creating the AliFlowEvents and
directly submitting them to the flow analysis, or the ‘maker’ phase can be splitted from the analysis ‘phase’, by storing the AliFlowEvents to disk.
In the latest developments both the AliFlowMaker and AliFlowKineMaker have been embedded in an AliAnalysisTask or an AliSelector (see below).
AliFlowTask (ex AliSelectorFlow)
Later developments of AliRoot have added functionalities to run a complete analysis over simulated events (see fig. 3.3). The class AliSelectorRL (inherited from the ROOT TSelector), later replaced by the class AliAnalysisTaskRL (inherited from the ROOT TTask, which also allows distributed analysis), performs in parallel the loop over the AliESDs and the KineTrees. For each reconstructed event, the AliStack and the simulated KineTree are also opened to give access to the kinematic information of the generated particles; by using the ‘labels’ stored in the AliESDtracks, each track candidate can be compared to the simulated particle that produced the hits in the detector from which the track was fitted (see sec. 2.3).
If the AliFlowMakers are executed through an AliAnalysisTaskRL, two AliFlowEvents are created (one from the AliESD and one from the KineTree), and the connection between particles and tracks is preserved. The reconstruction efficiency and purity can also be studied at this step (see below).
EffHist, EpHist, CutEff
A few additional classes have been implemented outside the AliFlow package to study the efficiency of the track reconstruction and the effect of the applied cuts:
• EffHist is a class to study the reconstruction efficiency and purity as a function of pT , η, φ and particle type for a given set of cuts;
• EpHist is a class to study the ‘true’ and ‘observed’ event plane resolution as a function of the applied cuts;
• CutEff is a class to study the dependence of efficiency and purity with respect to some specific observables (e.g. the tDCA or the fit χ2).
Without going into the details of their implementation, the general idea is to provide a structure that makes it easy to calculate the number of primary and secondary particles passing a given set of cuts.
Using the Monte Carlo information from the KineTree, the sensitive distributions (such as dN/dpT or dN/dη) of the reconstructed tracks are ordered in a three-dimensional array (four-dimensional if the particle type is also included), whose dimensions are given respectively by the number of applied cuts (n selections can be used, each one sharpening the cuts), the primary condition (track reconstructed from a primary particle, from a secondary particle, from a double-counted primary or from a double-counted secondary), and the momentum resolution (track reconstructed inside or outside the pT bin of the generated particle, see sec. 5.1). The same distributions are also generated from the KineTree of primary particles.
Simple operations on the produced histograms lead to the track reconstruction efficiency and purity as a function of pT, η and the applied cuts (see sec. 5.1 for the definitions). These classes have been used extensively to produce the results shown in sec. 5.1, 5.2 and 5.3.2.
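In terms of the counts stored in one bin of that array, the two quantities reduce to simple ratios (a sketch with hypothetical names; see sec. 5.1 for the exact definitions used in the thesis):

```cpp
// Per (pT, eta) bin: tracks reconstructed from primaries, all
// reconstructed tracks, and generated primaries from the KineTree.
struct BinCounts { double recPrim; double recAll; double genPrim; };

// efficiency: fraction of the generated primaries that were reconstructed
double Efficiency(const BinCounts& b) {
  return b.genPrim > 0.0 ? b.recPrim / b.genPrim : 0.0;
}

// purity: fraction of the reconstructed tracks that are primaries
double Purity(const BinCounts& b) {
  return b.recAll > 0.0 ? b.recPrim / b.recAll : 0.0;
}
```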
However, due to the recent implementation of a more general ‘efficiency framework’ in AliRoot, they have not been included in the AliFlow package.
Bibliography
[1] M.Cheng et al., The QCD equation of state with almost physical quark masses, arXiv:0710.0354 [hep-lat], 2007.
[2] F.Karsch and E.Laermann, Quark Gluon Plasma 3, World Scientific (2003).
[3] E.Laermann and O.Philipsen, The Status of Lattice QCD at Finite Temperature, Ann. Rev. Nucl. Part. Sci. 53 (2003) 163.
[4] F.Karsch, E.Laermann, and A.Peikert, The Pressure in 2, 2+1 and 3 Flavour QCD, Phys. Lett. B478 (2000) 447.
[5] M.Riordan and W.A.Zajc, The first few microseconds, Scientific American 294 (2006) 24.
[6] Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL), http://www.bnl.gov/rhic/.
[7] M.Gyulassy and L.McLerran, New Forms of QCD Matter Discovered at RHIC, Nucl. Phys. A750 (2005) 30.
[8] P.Steinberg, Hotter, Denser, Faster, Smaller... and Nearly-Perfect: What’s the matter at RHIC?, arXiv:nucl-ex/0702020, 2007.
[9] Cheuk-Yin Wong, Introduction to High-Energy Heavy-Ion Collisions, World Scientific Publishing Co., Singapore, 1994.
[10] K.M.O’Hara et al., Observation of a Strongly Interacting Degenerate Fermi Gas of Atoms, Science 298 (2002) 2179.
[11] S.A.Voloshin and Y.Zhang, Methods for analysing anisotropic flow in relativistic nuclear collisions, Z. Phys. C70 (1996) 665.
[12] C.Adler et al. (STAR Collaboration), Elliptic flow from two- and four-particle correlations in Au + Au collisions at √sNN = 130 GeV, Phys. Rev. C66 (2002) 034904.
[13] C.Alt et al. (NA49 Collaboration), Directed and elliptic flow of charged pions and protons in Pb+Pb collisions at 40 and 158 AGeV, Phys. Rev. C68 (2003) 034903.
[14] S.A.Voloshin, Energy and system size dependence of charged particle elliptic flow and v2/ε scaling, Quark Matter 2006 proceedings (2007).
122 BIBLIOGRAPHY
[15] G.Torrieri, Scaling of v2 in heavy ion collisions, Phys. Rev. C76 (2007) 024903.
[16] M.Miller and R.J.M.Snellings, Eccentricity fluctuations and its possible effect on elliptic flow measurements, arXiv:nucl-ex/0312008, 2003.
[17] P.F.Kolb et al., Elliptic flow at SPS and RHIC: from kinetic transport to hydrodynamics, Phys. Lett. B500 (2001) 232.
[18] P.F.Kolb and U.Heinz, Hydrodynamic description of ultrarelativistic heavy-ion collisions, World Scientific QGP3 (2004).
[19] L.D.Landau, On the multiparticle production in high-energy collisions, Izv. Akad. Nauk. Ser. Fiz. 17 (1953) 51.
[20] R.J.Glauber, Lectures on Theoretical Physics, Interscience NY Vol. 1 (1959).
[21] M.L.Miller et al., Glauber Modeling in High Energy Nuclear Collisions, Ann. Rev. Nucl. Part. Sci. 57 (2007).
[22] H.DeVries, C.W.DeJager, and C.DeVries, Nuclear Charge-Density-Distribution Parameters from Elastic Electron Scattering, Atomic Data and Nuclear Data Tables 36 (1987).
[23] W.M.Yao et al., Review of Particle Physics, J. Phys. G33 (2006) 1.
[24] The Particle Data Group, http://pdg.lbl.gov/.
[25] ALICE collaboration, ALICE Physics Performance Report, Volume 2, CERN/LHCC 30 (2005) .
[26] M.L.Miller et al., Hard scattering cross sections at LHC in the Glauber approach: from pp to pA and AA collisions, CERN Yellow Report on Hard Probes in Heavy Ion Collisions at the LHC (2004) .
[27] A.Bialas, M.Bleszynski, and W.Czyz, Multiplicity Distributions In Nucleus-Nucleus Collisions At High-Energies, Nucl. Phys. B111 (1976) 461.
[28] P.Jacobs and G.Cooper, Spatial Distribution of Initial Interactions in High Energy Collisions of Heavy Nuclei, arXiv:nuclex/0008015, 2000.
[29] T.Hirano and Y.Nara, Hydrodynamic afterburner for the Color Glass Condensate and the parton energy loss, Nucl. Phys. A743 (2004) 305.
[30] A.Adil et al., The eccentricity in heavy-ion collisions from Color Glass Condensate initial conditions, Phys. Rev. C74 (2006) 044905.
[31] J.Y.Ollitrault, Relativistic hydrodynamics, arXiv:0708.2433 [nucl-th], 2007.
[32] J.D.Bjorken, Highly relativistic nucleus-nucleus collisions: The central rapidity region, Phys. Rev. D27 (1983) 140.
[33] S.A.Voloshin, Toward the energy and the system size dependence of elliptic flow: working on flow fluctuations, Conference Proceedings for the 22nd Winter Workshop on Nuclear Dynamics (2006).
[34] S.Manly et al. (PHOBOS Collaboration), System size, energy and pseudorapidity dependence of directed and elliptic flow at RHIC, Nucl. Phys. A774 (2006) 523.
[35] R.S.Bhalerao and J.Y.Ollitrault, Eccentricity fluctuations and elliptic flow at RHIC, Phys. Lett. B641 (2006) 260.
[36] H.Heiselberg and A.M.Levy, Elliptic Flow and HBT in noncentral Nuclear Collisions, Phys. Rev. C59 (1999) 2716.
[37] I.G.Bearden et al. (NA49 Collaboration), Collective Expansion in High Energy Heavy Ion Collisions, Phys. Rev. Lett. 78 (1997) 2080.
[38] B.B.Back et al. (PHOBOS Collaboration), Identified hadron transverse momentum spectra in Au+Au collisions at √sNN = 62.4 GeV, Phys. Rev. C75 (2007) 024910.
[39] S.S.Adler et al. (PHENIX Collaboration), Identified Charged Particle Spectra and Yields in Au+Au Collisions at √sNN = 200 GeV, Phys. Rev. C69 (2004) 034909.
[40] V.Greco, C.M.Ko, and P.Levai, Parton coalescence and antiproton/pion anomaly at RHIC, Phys. Rev. Lett. 90 (2003) 202302.
[41] J.Adams et al. (STAR Collaboration), Identified hadron spectra at large transverse momentum in p+p and d+Au collisions at √sNN = 200 GeV, Phys. Lett. B637 (2006) 161.
[42] P.Braun-Munzinger, K.Redlich, and J.Stachel, Particle Production in Heavy Ion Collisions, arXiv:nucl-th/0304013, 2003.
[43] J.P.Blaizot and J.Y.Ollitrault, Hydrodynamics Of A Quark-Gluon Plasma Undergoing A Phase Transition, Nucl. Phys. A458 (1986) 745.
[44] J.Cleymans and K.Redlich, Unified Description of Freeze-Out Parameters in Relativistic Heavy Ion Collisions, Phys. Rev. Lett. 81 (1998) 5284.
[45] J.Adams et al., Experimental and theoretical challenges in the search for the quark gluon plasma: The STAR Collaboration’s critical assessment of the evidence from RHIC collisions, Nucl. Phys. A757 (2005) 102.
[46] H.Sorge, Flavor Production in Pb(160 AGeV) on Pb Collisions: Effect of Color Ropes and Hadronic Rescattering, Phys. Rev. C52 (1995) 3291.
[47] S.A.Bass et al., Microscopic Models for Ultrarelativistic Heavy Ion Collisions, Prog. Part. Nucl. Phys. 41 (1998) 225.
[48] M.Bleicher et al., Relativistic Hadron-Hadron Collisions in the Ultra-Relativistic Quantum Molecular Dynamics Model (UrQMD), J. Phys. G25 (1999) 1859.
[49] S.A.Voloshin and A.M.Poskanzer, The physics of the centrality dependence of elliptic flow, Phys. Lett. B474 (2000) 27.
[50] R.Hagedorn and J.Ranft, Statistical thermodynamics of strong interactions at high energies. 2. Momentum spectra of particles produced in p p collisions, Nuovo Cimento Suppl. 6 (1983) 169.
[51] D.Teaney, The Effect of Shear Viscosity on Spectra, Elliptic Flow, and HBT Radii, Phys. Rev. C68 (2003) 034913.
[52] R.Baier, P.Romatschke, and U.A.Wiedemann, Dissipative Hydrodynamics and Heavy Ion Collisions, Phys. Rev. C73 (2006) 064903.
[53] T.Hirano, Hydrodynamic models, J. Phys. G30 (2004) S845.
[54] P.Kovtun, D.T.Son, and A.O.Starinets, Viscosity in Strongly Interacting Quantum Field Theories from Black Hole Physics, Phys. Rev. Lett. 94 (2005) 111601.
[55] H.J.Drescher et al., The centrality dependence of elliptic flow, the hydrodynamic limit, and the viscosity of hot QCD, Phys. Rev. C76 (2007) 024905.
[56] P.F.Kolb, J.Sollfrank, and U.Heinz, Anisotropic transverse flow and the quark hadron phase transition, Phys. Rev. C62 (2000) 054909.
[57] R.S.Bhalerao, J.P.Blaizot, N.Borghini, and J.Y.Ollitrault, Elliptic flow and incomplete equilibration at RHIC, Phys. Lett. B627 (2005) 49.
[58] N.Armesto, C.A.Salgado, and U.A.Wiedemann, Relating high-energy lepton-hadron, proton-nucleus and nucleus-nucleus collisions through geometric scaling, Phys. Rev. Lett. 94 (2005) 022002.
[59] P.F.Kolb et al., Centrality dependence of multiplicity, transverse energy, and elliptic flow from hydrodynamics, Nucl. Phys. A696 (2001) 197.
[60] K.Golec-Biernat and M.Wusthoff, Saturation Effects in Deep Inelastic Scattering at low Q2 and its Implications on Diffraction, Phys. Rev. D59 (1999) 014017.
[61] H.J.Drescher, A.Dumitru, and J.Y.Ollitrault, The centrality dependence of elliptic flow at LHC, Proceedings of the CERN Workshop Heavy Ion Collisions at the LHC: Last Call for Predictions (2007) .
[62] P.Huovinen, Anisotropy of flow and the order of phase transition in relativistic heavy ion collisions, Nucl. Phys. A761 (2005) 296.
[63] J.Adams et al. (STAR Collaboration), Azimuthal anisotropy in Au+Au collisions at √sNN = 200 GeV, Phys. Rev. C72 (2005) 014904.
[64] A.Adare et al. (PHENIX Collaboration), Scaling properties of azimuthal anisotropy in Au+Au and Cu+Cu collisions at √sNN = 200 GeV, arXiv:nucl-ex/0608033, 2006.
[65] C.Adler et al. (STAR Collaboration), Azimuthal anisotropy and correlations in the hard scattering regime at RHIC, Phys. Rev. Lett. 90 (2003) 032301.
[66] T.Hirano, Is early thermalization achieved only near midrapidity at RHIC ?, Phys. Rev. C65 (2002) 011901.
[67] T.Hirano and K.Tsuda, Collective flow and twopion correlations from a relativistic hydrodynamic model with early chemical freeze out, Phys. Rev. C66 (2002) 054905.
[68] B.B.Back et al. (PHOBOS Collaboration), The PHOBOS Perspective on Discoveries at RHIC, Nucl. Phys. A757 (2005) 28.
[69] ALICE, A Large Ion Collider Experiment, http://aliceinfo.cern.ch/.
[70] The Large Hadron Collider (LHC) at CERN, http://lhc.web.cern.ch/lhc/.
[71] ALICE Collaboration, ALICE Physics Performance Report, Vol. 1, CERN/LHCC 049 (2003).
[72] ALICE Collaboration, Technical Proposal, CERN/LHCC 71 (1995).
[73] ALICE Collaboration, Technical Proposal, Addendum 1, CERN/LHCC 32 (1996).
[74] ALICE Collaboration, Technical Proposal, Addendum 1, CERN/LHCC 13 (1999).
[75] ALICE Collaboration, Technical Design Report of the Inner Tracking System, CERN/LHCC 12 (1999).
[76] ALICE Collaboration, Technical Design Report of the Time-Projection Chamber, CERN/LHCC 01 (2000).
[77] ALICE Collaboration, Technical Design Report of the Transition-Radiation Detector, CERN/LHCC 21 (2001).
[78] ALICE Collaboration, Technical Design Report of the Time-Of-Flight Detector, CERN/LHCC 12 (2000).
[79] ALICE Collaboration, Technical Design Report of the Time-Of-Flight Detector, Addendum 1, CERN/LHCC 16 (2002).
[80] ALICE Collaboration, Technical Design Report of the High-Momentum Particle Identification Detector, CERN/LHCC 19 (1998).
[81] ALICE Collaboration, Technical Design Report of the Photon Spectrometer, CERN/LHCC 04 (1999).
[82] ALICE Collaboration, Technical Design Report of the Zero-Degree Calorimeter, CERN/LHCC 05 (1999).
[83] ALICE Collaboration, Technical Design Report of the Forward Muon Spectrometer, CERN/LHCC 22 (1999).
[84] ALICE Collaboration, Technical Design Report of the Forward Muon Spectrometer, Addendum 1, CERN/LHCC 46 (2000).
[85] ALICE Collaboration, Technical Design Report of the Photon Multiplicity Detector, CERN/LHCC 32 (1999).
[86] ALICE Collaboration, Technical Design Report of the Photon Multiplicity Detector, Addendum 1, CERN/LHCC 38 (2003).
[87] ALICE Collaboration, Forward detectors technical design report, http://alice.web.cern.ch/Alice/tdr/fmdv0t0/web/, 2007.
[88] D.Evans et al. (ALICE Collaboration), The ALICE central trigger system, Real Time Conference, 14th IEEENP 410 (2005) .
[89] ALICE Collaboration, ALICE DAQ and ECS User’s Guide (ALICE Internal Note/DAQ), ALICEINT 015 (2005) .
[90] P.Fonte, A.Smirnitski, and M.C.S.Williams, A new high-resolution TOF technology, Nucl. Instrum. and Methods A443 (2000) 201.
[91] ROOT, an Object-Oriented Data Analysis Framework, http://root.cern.ch/.
[92] NA49, Large Acceptance Hadron Detector for an Investigation of Pb-induced Reactions at the CERN SPS, http://na49info.web.cern.ch/na49info/.
[93] R.Brun and F.Rademakers, ROOT: An object oriented data analysis framework, Nucl. Instrum. and Methods A389 (1997) 81.
[94] CINT, the C/C++ Interpreter, http://root.cern.ch/root/Cint.html.
[95] GCC, the GNU Compiler Collection, http://gcc.gnu.org/.
[96] STAR Collaboration (computing), STAR C++ Class library, STAR Internal Note (2006) .
[97] N.J.A.M. Van Eindhoven, GALICE The Geant based ALICE detector simulation package, ALICE/SIM 44 (1995) .
[98] R.Brun et al., GEANT3, Internal report CERN DD/EE/84-1.
[99] GEANT, Detector Description and Simulation Tool, http://wwwasd.web.cern.ch/wwwasd/geant/.
[100] FLUKA, a fully integrated particle physics Monte Carlo simulation package, http://www.fluka.org/.
[101] The ALICE Offline Project, http://aliceinfo.cern.ch/Offline/.
[102] X.N.Wang and M.Gyulassy, HIJING: A Monte Carlo Model for Multiple Jet Production in pp, pA and AA Collisions, Phys. Rev. D44 (1991) 3501.
[103] M.Gyulassy and X.N.Wang, HIJING 1.0: A Monte Carlo Program for Parton and Particle Production in High Energy Hadronic and Nuclear Collisions, LBL34246 (2000) .
[104] HIJING Monte Carlo Model, http://www-nsdth.lbl.gov/~xnwang/hijing/.
[105] B.Andersson et al., Parton fragmentation and string dynamics, Phys. Rep. 97 (1983) 31.
[106] H.U.Bengtsson and T.Sjostrand, The Lund Monte Carlo For Hadronic Processes: Pythia Version 4.8, Comput. Phys. Commun. 46 (1987) 43.
[107] T.Sjostrand, The Lund Monte Carlo For Jet Fragmentation And e+e- Physics: Jetset Version 6.2, Comput. Phys. Commun. 39 (1986) 347.
[108] X.N.Wang and M.Gyulassy, Gluon shadowing and jet quenching in A+A collisions at √ s = 200 AGeV, Phys. Rev. Lett. 68 (1992) 1480.
[109] S.Radomski and P.Foka, GeVSim Monte Carlo Event Generator, ALICE Note (Gesellschaft fur Schwerionenforschung, Darmstadt) (2002).
[110] GeVSim and the Flow After-Burner, http://radomski.web.cern.ch/radomski/.
[111] R.L.Ray and R.S.Longacre, MEVSIM: A Monte Carlo Event Generator for STAR, LANL e-print nucl-ex/0008009 (2000).
[112] LCG, Computing Grid Project, http://lcg.web.cern.ch/LCG/.
[113] P.Saiz et al. for the ALICE Collaboration, AliEn - ALICE environment on the GRID, Nucl. Instrum. and Methods A502 (2003) 437.
[114] AliEn, a lightweight Grid framework for ALICE, http://alien.cern.ch.
[115] MonALISA Repository for ALICE, http://pcalimonitor.cern.ch/.
[116] P.Billoir, Track fitting with multiple scattering, Nucl. Instrum. and Methods A225 (1984) 352.
[117] R.Fruhwirth, Application of Kalman filtering to track and vertex fitting, Nucl. Instrum. and Methods A262 (1987) 444.
[118] M.Aguilar-Benitez, Inclusive particle production in 400 GeV pp interactions, Z. Phys. C50 (1991) 405.
[119] G.D’Agostini, Probability and Measurement Uncertainty in Physics - a Bayesian Primer, arXiv:hep-ph/9512295, 1995.
[120] G.D’Agostini, Bayesian Inference in Processing Experimental Data: Principles and Basic Applications, arXiv:physics/0304102, 2003.
[121] M.Botje, Introduction to Bayesian Inference, Nikhef internal notes (2006) .
[122] C.Zampolli, Eventbyevent fluctuation studies in the ALICE experiment, Eur. Phys. J. C49 (2007) 309.
[123] A.M.Poskanzer and S.A.Voloshin, Methods for analysing anisotropic flow in relativistic nuclear collisions, Phys. Rev. C58 (1998) 1671.
[124] P.F.Kolb, v4: A small, but sensitive observable for heavy ion collisions, Phys. Rev. C68 (2003) 031902.
[125] P.Danielewicz and G.Odyniec, Transverse Momentum Analysis of Collective Motion in Relativistic Nuclear Collisions, Phys. Lett. B157 (1985) 146.
[126] R.J.M.Snellings et al., Novel rapidity dependence of directed flow in high energy heavy ion collisions, Phys. Rev. Lett. 84 (2000) 2803.
[127] P.Danielewicz, Effects of Compression and Collective Expansion on Particle Emission from Central Heavy-Ion Reactions, Phys. Rev. C51 (1995) 716.
[128] S.A.Voloshin, Anisotropic flow, Nucl. Phys. A715 (2002) 379.
[129] The STAR experiment at RHIC, http://www.star.bnl.gov/STAR/comp/.
[130] P.Hristov and F.Carminati, The ALICE Offline Bible (Version 0.0), 2007.
[131] The AliFlow package, http://www.phys.uu.nl/~simili/FloWeb/.
[132] S.Wang et al., Measurement of collective flow in heavy-ion collisions using particle-pair correlations, Phys. Rev. C44 (1991) 1091.
[133] N.Borghini, P.M.Dinh, and J.Y.Ollitrault, A new method for measuring azimuthal distributions in nucleus-nucleus collisions, Phys. Rev. C63 (2001) 054906.
[134] R.S.Bhalerao, N.Borghini, and J.Y.Ollitrault, Analysis of anisotropic flow with Lee-Yang Zeroes, Nucl. Phys. A727 (2003) 373.
[135] K.Adcox et al. (PHENIX Collaboration), Flow Measurements via Two-Particle Azimuthal Correlations in Au+Au Collisions at √sNN = 130 GeV, Phys. Rev. Lett. 89 (2002) 212301.
[136] N.Borghini, P.M.Dinh, and J.Y.Ollitrault, Flow analysis from multiparticle azimuthal correlations, Phys. Rev. C64 (2001) 054901.
[137] N.Borghini, P.M.Dinh, and J.Y.Ollitrault, Flow analysis from cumulants: a practical guide, Proceedings of the International Workshop on the Physics of the Quark-Gluon Plasma, Palaiseau, France, 4-7 Sept. 2001 (2001).
[138] Y.Bai (for the STAR Collaboration), The anisotropic flow coefficients v2 and v4 in Au+Au collisions at RHIC, arXiv:nuclex/0701044, 2007.
[139] R.J.M.Snellings for the STAR collaboration, Elliptic flow measurements from STAR, Heavy Ion Phys. 21 (2004) 237.
[140] R.J.M.Snellings, Elliptic flow in Au+Au collisions at √sNN = 130 GeV, Nucl. Phys. A698 (2002) 193.
[141] STAR Collaboration, Elliptic Flow from two- and four-particle correlations in Au+Au collisions at √sNN = 130 GeV, Phys. Rev. C66 (2002) 034904.
[142] N.Borghini, R.S.Bhalerao, and J.Y.Ollitrault, Anisotropic flow from Lee-Yang zeroes: a practical guide, J. Phys. G30 (2003) S1213.
[143] N.Kolk, A.Bilandzic, J.Ollitrault, and R.Snellings, Event-plane flow analysis without non-flow effects, arXiv:0801.3915 [nucl-ex], 2008.
[144] S.Wheaton and J.Cleymans, THERMUS: A Thermal Model Package for ROOT, arXiv:hep-ph/0407174, 2004.
Summary
This thesis presents a study of elliptic flow in lead-lead collisions, in the context of ALICE (A Large Ion Collider Experiment), a dedicated heavy ion detector installed at the Large Hadron Collider (LHC) at CERN.
In a non-central collision, the term ‘anisotropic flow’ refers to the azimuthal anisotropy in the momentum distribution of the emitted particles, which is usually quantified by a Fourier expansion of the d3N/d~p distribution along the direction of the ‘reaction plane’ (the plane spanned by the impact parameter and the beam pipe). Elliptic flow, the second coefficient of this expansion, is denoted as v2.
In the current understanding, v2 is a key observable to study the thermodynamic properties and the Equation of State of the system created in the early stage of the collision, where the formation of the Quark Gluon Plasma (QGP) is expected: the final momentum anisotropy can be connected to the spatial eccentricity of the initial state by assuming that the constituents are strongly coupled and the system behaves as a relativistic fluid. The magnitude of v2 with respect to the eccentricity of the collision measures the strength of this coupling.
This thesis was developed in a period when the LHC was not yet operational; the work was therefore devoted to the implementation of experimentally driven predictions of the main observables in Pb-Pb collisions at LHC energy, and to the development of analysis tools to be used in the ALICE environment. The thesis also shows a full example of a flow analysis on simulated heavy ion data, and points out the main sources of experimental uncertainty.
The expected values of elliptic flow and charged multiplicity have been extrapolated, for Pb-Pb collisions at √sNN = 5.5 TeV, in two independent ways (the Low Density Limit approximation and the Relativistic Hydrodynamic model), producing different impact parameter dependences of the elliptic flow. These predictions have been used as an input for simulations in the ALICE offline framework, to develop and test a flow analysis code. The analysis algorithm is based on the event plane method, already successfully used for flow studies in other heavy ion experiments at lower energy, such as the Relativistic Heavy Ion Collider (RHIC) in Brookhaven, and the NA49 experiment at the Super Proton Synchrotron (SPS) at CERN.
One of the biggest experimental uncertainties in measuring flow at the LHC is the magnitude of non-flow effects, i.e. azimuthal correlations between collision products not due to collective flow, and therefore not correlated with the reaction plane. Depending on the analysis method, non-flow effects can introduce a large systematic error in the flow measurement. Non-flow effects have been simulated using Hijing, a heavy-ion event generator which implements all known physics effects from a superposition of proton-proton collisions. Comparison between the expected magnitude of elliptic flow and the estimated magnitude of non-flow contributions defines the applicability of the event plane analysis. The study also shows that non-flow effects are less important when the genuine flow or the multiplicity are large, leading to the conclusion that only peripheral reactions are heavily affected by non-flow. The event plane analysis, however, cannot completely disentangle genuine collective flow from non-flow effects, and therefore other methods should also be used (e.g. the Cumulants or the Lee-Yang Zeroes methods).
A large systematic error in the calculation of the integrated v2 is related to the uncertainty on the reconstruction efficiency, which depends on the accuracy of the input and on the event selection. In particular, a better parametrization of the particle ratios (possibly modeled on experimental data) should be implemented in the simulations, and multiplicity-dependent correction factors should be used.
However, the analysis shows that the input values of the simulations can be reconstructed within an accuracy of a few percent, leading to the conclusion that the ALICE experiment is an optimal environment in which to measure elliptic flow, and that the event plane analysis provides an easy and straightforward procedure to perform the measurement over a wide range of centralities; it is therefore well suited to 'first-day' physics analysis at ALICE.
Samenvatting
In this thesis, elliptic flow of particles in lead-lead collisions is studied with ALICE (A Large Ion Collider Experiment), an advanced heavy ion detector installed at the Large Hadron Collider (LHC) at CERN.
In a non-central collision, the term anisotropic flow refers to the anisotropy in the azimuthal momentum distribution of the emitted particles. It is generally quantified by the Fourier expansion of the d³N/d³p distribution with respect to the 'reaction plane' (the plane spanned by the impact parameter and the beam direction). Elliptic flow, the second component of the expansion, is denoted v2. According to current understanding, v2 is a key observable for studying the thermodynamic properties and the Equation of State of the system formed shortly after the collision, when the creation of a Quark Gluon Plasma is expected: the final momentum anisotropy can be connected to the spatial eccentricity of the initial state by assuming that the relevant degrees of freedom are strongly coupled and that the system behaves as a relativistic fluid, with the magnitude of v2 relative to the eccentricity of the collision representing the strength of the coupling.
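For reference, the Fourier decomposition referred to here is conventionally written (in the standard flow notation) as

```latex
E \frac{d^3N}{d^3p}
  = \frac{1}{2\pi}\,\frac{d^2N}{p_T\,dp_T\,dy}
    \left( 1 + \sum_{n=1}^{\infty} 2\,v_n \cos\!\bigl[ n(\varphi - \Psi_{RP}) \bigr] \right),
\qquad
v_2 = \bigl\langle \cos\!\bigl[ 2(\varphi - \Psi_{RP}) \bigr] \bigr\rangle ,
```

where φ is the particle azimuth, Ψ_RP the reaction-plane angle, and the average for v2 runs over the particles in an event (and over events).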
Unfortunately, this thesis was produced in the period when the LHC was not yet in operation; as a consequence, the work focused on implementing experimentally driven predictions of the most important observables in lead-lead collisions at LHC energies and on developing the analysis tools to be used in the ALICE environment. This thesis also contains a full example of the flow analysis on simulated heavy ion data and shows the main sources of experimental uncertainty.
The expected values of the elliptic flow and the charged-particle multiplicity have been calculated for lead-lead collisions at √sNN = 5.5 TeV in two different ways (the low density limit approximation and the relativistic hydrodynamic model), resulting in different impact-parameter dependences of the elliptic flow. These predictions have been used as input for the simulations in the ALICE offline framework, in order to develop and test the analysis code. The analysis algorithm, based on the reaction plane method, has already been used successfully for flow studies at other heavy ion experiments at lower energies, such as at the Relativistic Heavy Ion Collider (RHIC) in Brookhaven and at the NA49 experiment at the Super Proton Synchrotron (SPS) at CERN.
One of the largest experimental uncertainties in measuring flow at the LHC is the magnitude of nonflow effects, i.e. azimuthal correlations between collision products that are not due to collective flow and are therefore not correlated with the reaction plane. Depending on the analysis method, nonflow effects can cause a large systematic error in the flow measurement. Nonflow effects have been simulated with Hijing, a heavy ion event generator in which all known physics effects of proton-proton collisions are implemented. The comparison between the expected magnitude of the elliptic flow and the expected magnitude of nonflow effects defines the applicability of the reaction plane method. The study also shows that nonflow effects are less important when the genuine flow or the multiplicity is large, from which it follows that only peripheral collisions are heavily affected by nonflow effects. The reaction plane method, however, cannot be used to completely disentangle nonflow effects from flow effects, and therefore other methods will also have to be used (such as the Cumulants or Lee-Yang zeroes methods).
A large systematic error in the calculation of the integrated v2 is related to the uncertainty of the reconstruction efficiency, which depends on the accuracy of the input data and on the event selection. In particular, a better parametrization of the particle ratios (possibly based on experimental data) will have to be implemented in the simulations, and multiplicity-dependent correction factors will have to be used.
The analysis shows, however, that the input values of the simulations can be reconstructed within a margin of a few percent. This leads to the conclusion that the ALICE experiment is an optimal environment for the measurement of elliptic flow, and that the reaction plane method provides a simple and transparent procedure to carry out the measurement over a wide range of centralities; this method can therefore be used perfectly in a first physics analysis with ALICE.
Acknowledgements
When I finished my 'Laurea' thesis, back in 2002, I was tripping about the potential of our science to disclose a deeper level of understanding of reality. The enthusiasm of my first experience in high-energy physics made me look for a Ph.D. position, which I luckily found on ALICE. By an amazing coincidence, Alice was also the name of my girlfriend at that time, as well as the girl in my favorite fairy tale. Almost five years have passed, my knowledge and technical skills have improved, but more than once I had the impression that I was rolling down the rabbit hole and losing myself in the deepness that fascinated me so much. What was I looking for, again?
Fortunately, the story has a happy ending. However, I would not have succeeded without the help of a few people, to whom I want to address special thanks. First of all, thanks to Rene: his determination and his patience have kindly kicked me toward the end. To Raimond, for his clever and precious advice and his positive attitude, ready to talk about anything, physics related or not, at any time. Thanks to Thomas, for having accepted me into the SubAtomic Physics group in Utrecht, and for having trusted me once more by 'extending' my seemingly unreachable deadline. To Nick, for his essential help in getting started with the ALICE framework, and for keeping alive the tradition of the 'Physics Colloquium' (and drinks). Thanks to Andrea (Cky) for the artistic cover of this book, to Wilko for the Dutch 'samenvatting', to Cristian for his Photoshop skills, and to Marco for his help with the short summary, which allowed me to get a defense date. And, of course, thanks to Paul, for driving me back from NIKHEF so many times, and for the long discussions about my meaningless plots, sometimes going on for the whole trip.
Thanks also to the rest of the group: Gert-Jan, for assigning me some of the most intriguing tasks I have done during my studies, such as the Van de Graaff experiment and the HISPARC project; Kees, for his suggestion not to buy such a problematic 64-bit laptop (which I did not listen to); Ton and Arie, for involving me in the assembling of the ITS, and for trusting my movie editing skills; Rene (the young one), for fixing my problematic laptop, twice.
Thanks to the other Ph.D. students who have been more or less contemporary with me, Alexey, Yuting, Sasha and Martijn (who are done), and Federica, Cristian (again), Ermes and Marek (still on their way), for many fruitful discussions and a few social activities. To my first office mates, who finished long before me, Garmt and Hernan, for providing a living example of a Ph.D. in its terminal state; at that time I could not understand the pain you were going through. For 'par condicio', let's also thank Phanos, Ingrid, André, Michiel, Mikolaj, Naomi, Ante, and the younger students, Despoina, Minko (gone), Marta, Pédzi, Merijn and Wilko (again). Last but not least, thanks to Astrid for her precious help with my integration into the Dutch environment. I hope I did not forget anyone.
Who else? Thanks to the Grid people at NIKHEF, Jeff, Ronald and David, for allowing my buggy simulations onto their supercomputer. To the Torino group, Luciano, Francesco, Massimo, Chiara, etc., for the useful mail exchanges, and for a few amusing dinners around CERN and Shanghai. To the CERN people, in particular Federico, Peter, Marian and Youri, for such an indefatigable devotion to ALICE and for providing an essential helpdesk to the entire collaboration.
Thanks to the Don Gauderio fellows from the Latin American Summer School in Malargüe, Teresa, Clementina, Eduardo, Felix, Michele, etc., for two unforgettable weeks in the name of physics and tequila. To Antonello for the first LaTeX layout of this thesis, and to Joana, for having been the cutest office mate ever, and (hopefully) for offering me a job.
Thanks to my family, for always being a moral support, especially in my 'down' periods. And finally, thanks to all my friends, and girlfriends, and all the people who have been part of my Dutch life for a variable amount of time. To Stefano, for having been so brave as to follow me down from a perfectly working airplane; I would probably never have started skydiving alone. And to José, for joining us as soon as he had the chance. 'Blue sky!', you guys. To Claudietta, for having been my personal movie star. To Eri, for being the most active party guy ever, always HardCore! To Sandra: the Mick O'Connell will never be the same without you sitting at the entrance. To Vanessa, for having been a bearable flatmate for so long in peaceful Lunetten. It was nice to share such a 'gezellig' experience with all of you from the very beginning; I hope we will always stay in touch.
Thanks to Alessia for her unconditional happiness and her positive radiation, to Yuria for her tropical sweetness, and to Pimwipa for her endurance in filling the cultural gap. Thanks to Anna for her personality and for the best barbecue place in Utrecht. To Laura for the pit stops at (once upon a time) Biltstraat 81.
Thanks to Francesco, Neile and all the 'Giant Wombats Killed My Grandma': you were the perfect soundtrack for one of the best periods of my life, and I am so sorry that you never became RockStars, as you used to sing. Thanks also to Gabriel, Arnaud and all the 'New Acquisition'; not exactly my music, but you still kick ass. Keep on playing.
So many people, and places, and things to do. While the rabbit hole is explored down to hell by the cleverest people on earth, I wish to conclude my trip here. And home we steer, a merry crew, beneath the setting sun.