A Search for Long-Lived Charged Particles
in Cosmic Rays
BY
MARIO CAMUYRANO
M.S. (University of Illinois at Chicago, United States) 2008
THESIS
Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Physics
in the Graduate College of the University of Illinois at Chicago, 2012
Chicago, Illinois
Defense Committee:
Mark Adams, Chair and Advisor
Nikos Varelas
David Hofman
Cecilia Gerber
Diego Casa, Argonne National Laboratory
Copyright by
Mario Camuyrano
2012
To Laura Virginia ... and
to the ones that left too early, with too many things to do yet.
And the ones that take too long to do anything.
ACKNOWLEDGMENT
I want to thank Laura Virginia and all our big family, my mother Rosario, mother in
law Maria Laura, father Mario, father in law Dardo, grandmother Cristina and grandfather
Salvador, sister Maria Victoria, brother and sister in law Rodolfo and Mariana, nieces Victoria
and Virginia, nephew and nieces Julian, Malena and Felicitas. Uncle and aunts, Bambina,
Marcela, Pascual, Alicia and cousins Gabriela, Martin, Santiago, Agustin, Diego and Claudia...
for all their unconditional love and support.
I want to thank Mark for all his time and dedication and for transmitting to me his passion for
astronomy (this is not an astronomy thesis though). I am also grateful to all the UIC HEP
people for listening with interest to all my talks, and to all the committee members for their time:
Cecilia for all her advice and interest, Nikos for his interest in the subject and all his questions
and ideas, Dave for all his help, advice and great support, and Diego for his generosity always.
I want to thank Victor for introducing me to the CLs world, and Wade Fisher for his time.
I want to thank very specially all my good friends here and in Argentina, especially Gustavo
and Julia, who invited me to UIC and facilitated everything for me, including housing. The friends
that are always there: Diego, Natalia, Jaesung, Tim, Ahmet, Francisco, Sankar, Nico, Carlos,
Jeronimo, Daniel, Alejandro, Mastro, Pablo, Richard, Ricardo, Nestor, Gabriel, Ariel, German,
Cosme, Emilio, Cora, Alejo, Miro, Edu, Tefi, Moni, Miri, Robert, James, Gustavo, Ruben.
Melodie, Luis, Derrick and the friends I forget to mention.
I want to thank all who take the time to read this thesis.
TABLE OF CONTENTS
CHAPTER PAGE
1 INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Cosmic rays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Muon lifetime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 Muon lifetime in matter . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Long-lived particle searches . . . . . . . . . . . . . . . . . . . . . 11
1.6 Thesis outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 EXPERIMENTAL PROCEDURE . . . . . . . . . . . . . . . . . . . . . 15
2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Experiment setup . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3 DATA ANALYSIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1 General procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.1.1 Trigger definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.1.2 Processing the raw data . . . . . . . . . . . . . . . . . . . . . . . 34
3.2 Hardware studies . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3 Detector studies . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4 RESULTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.1 Muon lifetime measurement . . . . . . . . . . . . . . . . . . . . 48
4.1.1 Muon capture rate . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.1.2 Muon lifetime East-West asymmetry . . . . . . . . . . . . . . . 52
4.2 Setting limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.2.1 Cleaning the data sample . . . . . . . . . . . . . . . . . . . . . . 59
4.2.2 Lifetime spectrum comparison . . . . . . . . . . . . . . . . . . . 63
4.2.2.1 Muon decays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.2.2.2 Muon decays within large EAS . . . . . . . . . . . . . . . . . . . 68
4.2.3 Negative log-likelihood implementation . . . . . . . . . . . . . . 75
4.3 Results for SB and S1’ samples . . . . . . . . . . . . . . . . . . 84
4.3.1 Systematic errors estimation . . . . . . . . . . . . . . . . . . . . 95
4.4 Results with extensive air shower S tag and J tag . . . . . . . 98
5 EXTENDED DECAY TIME SEARCH . . . . . . . . . . . . . . . . . 100
5.1 Background estimation . . . . . . . . . . . . . . . . . . . . . . . 102
5.2 Signal plus background estimation . . . . . . . . . . . . . . . . 110
5.2.1 Systematic errors estimation . . . . . . . . . . . . . . . . . . . . 111
5.3 Confidence limits determination applied to very long lifetimes . 111
6 CONCLUSIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
APPENDICES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Appendix A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Appendix B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Appendix C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
CITED LITERATURE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
LIST OF TABLES
TABLE PAGE
I MUON ENERGY LOSS IN AIR . . . . . . . . . . . . . . . . . . . . . 9
II EXPERIMENT RATE COMPARISON . . . . . . . . . . . . . . . 23
III AVERAGE LIFETIME EAST-WEST ASYMMETRY . . . . . . . . 58
IV MUON DECAYS AND BACKGROUND COMPARISON . . . . . . 67
V NUMBER OF MUON DECAYS COMPARISON . . . . . . . . . . . 74
VI BACKGROUND 2σ-FLUCTUATION IN MUON LIFETIME . . . . 97
VII EXPERIMENT EVOLUTION RATE COMPARISON . . . . . 123
VIII FIT SELF CONSISTENCY TEST . . . . . . . . . . . . . . . . . . . . 147
LIST OF FIGURES
FIGURE PAGE
1 Elementary particles in the Standard Model . . . . . . . . . . . . . . . . 3
2 Content of the universe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3 Primary cosmic radiation composition . . . . . . . . . . . . . . . . . . . . 6
4 Estimated vertical fluxes of particles in cosmic ray air showers . . . . . 7
5 Spectra of atmospheric muons at the ground . . . . . . . . . . . . . . . 8
6 Electron capture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
7 Experimental muon charge ratio . . . . . . . . . . . . . . . . . . . . . . . . 10
8 Montecarlo simulation of a long-lived charged particle decay. . . . . . . 17
9 W-T array setup sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
10 Experimental layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
11 Synchronization of CRs through DAQs with different GPSs . . . . . . . 28
12 Time distribution of W-T across DAQs . . . . . . . . . . . . . . . . . . . 29
13 Signal, raw data and reconstructed data . . . . . . . . . . . . . . . . . . . 30
14 Detector calibration plateaus . . . . . . . . . . . . . . . . . . . . . . . . . . 32
15 Time distributions for W-T counters . . . . . . . . . . . . . . . . . . . . . 33
16 Data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
17 S1’ 16 channel output and event display . . . . . . . . . . . . . . . . . . . 37
18 Montecarlo simulation for randoms in a simulated long-lived charged particle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
19 Event time calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
20 DAQ 5000 frequency correction . . . . . . . . . . . . . . . . . . . . . . . . 42
21 DAQ 6000 PPS delay with respect to the GPS latched time . . . . . . 43
22 Muon lifetime turn on . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
23 Trigger bias "hump" in the muon lifetime . . . . . . . . . . . . . . . . . 47
24 Muon capture rate fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
25 Muon East-West asymmetry . . . . . . . . . . . . . . . . . . . . . . . . . . 54
26 Muon East-West flux estimation . . . . . . . . . . . . . . . . . . . . . . . . 56
27 S1’ and SB 1-hour rates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
28 S1’ and SB rates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
29 Sources of fluctuation of muon rates at sea level . . . . . . . . . . . . . . 62
30 Montecarlo simulation of a long-lived charged particle. . . . . . . . . . . 63
31 Lifetime spectrum comparison . . . . . . . . . . . . . . . . . . . . . . . . . 65
32 Lifetime spectrum comparison . . . . . . . . . . . . . . . . . . . . . . . . . 66
33 Lifetime EAS S-tag, no veto . . . . . . . . . . . . . . . . . . . . . . . . . . 69
34 Lifetime EAS S-tag, no veto . . . . . . . . . . . . . . . . . . . . . . . . . . 70
35 Lifetime EAS S-tag, with veto . . . . . . . . . . . . . . . . . . . . . . . . . 71
36 S1’ and SB lifetime fits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
37 NLLR 95% Confidence Limits . . . . . . . . . . . . . . . . . . . . . . . . . 80
38 NLLR 95% Confidence Limits example . . . . . . . . . . . . . . . . . . . . 81
39 NLLR signal selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
40 Lifetime background simulation . . . . . . . . . . . . . . . . . . . . . . . . 86
41 CHAMP simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
42 95% Confidence Limits for SB sample . . . . . . . . . . . . . . . . . . . . 89
43 95% Confidence Limits for S1’ and SB sample . . . . . . . . . . . . . . . 91
44 95% Confidence Limits for SB and S1’ sample . . . . . . . . . . . . . . . 92
45 95% Confidence Limits for S1’ and SB sample . . . . . . . . . . . . . . . 94
46 W clock frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
47 95% Confidence Limits for large EAS . . . . . . . . . . . . . . . . . . . . . 99
48 Extended-time possible trajectories . . . . . . . . . . . . . . . . . . . . . . 101
49 Background estimation of simulation . . . . . . . . . . . . . . . . . . . . . 106
50 Data, simulation and estimation for background . . . . . . . . . . . . . . 107
51 Data with estimation for background subtracted . . . . . . . . . . . . . . 108
52 Estimation of background for short lifetimes . . . . . . . . . . . . . . . . 109
53 Long lifetime limits for SB . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
54 Long lifetime limits for S1’ . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
55 Experimental evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
56 Decay simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
57 Lifetime versus number of bins . . . . . . . . . . . . . . . . . . . . . . . . 128
58 Lifetime uncertainty versus number of bins . . . . . . . . . . . . . . . . . 130
59 Modified Stabilized LS fit versus number of bins . . . . . . . . . . . . 131
60 Modified fit uncertainty vs. number of pseudo-experiments . . . . . . . . 134
61 Simulation of decay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
62 A set of simulated lifetime experiments . . . . . . . . . . . . . . . . . . . 137
63 Standard deviation comparison . . . . . . . . . . . . . . . . . . . . . . . . 138
64 Weights for randomly distributed Poisson data . . . . . . . . . . . . . . . 140
65 Fit bias in a lifetime measurement . . . . . . . . . . . . . . . . . . . . . . 141
66 Fit bias method comparison for 100 pseudo-experiments . . . . . . . . . 143
67 Fit bias method comparison with increasing number of pseudo-experiments 144
68 Fit bias with reduced statistics . . . . . . . . . . . . . . . . . . . . . . . . 146
LIST OF ABBREVIATIONS
CHAMP Charged Massive Particle
CMSP Charged Massive Stable Particle
CR Cosmic Ray
EAS Extensive Air Shower
NLLR Negative Log-Likelihood Ratio
PDF Probability Density Function
PE Pseudo Experiment
PMT Photomultiplier Tube
PPS Pulse Per Second
DAQ Data Acquisition
GPS Global Positioning System
TMC Time Measurement Chip
TOF Time of Flight
UTC Coordinated Universal Time
WIMP Weakly Interacting Massive Particle
SUMMARY
A search for a new long-lived elementary particle produced in cosmic ray air showers is
presented. This measurement is based on data recorded from November 2009 to December
2010 in a plastic scintillation detector developed and housed at the University of Illinois at
Chicago. No evidence of a new particle was observed in the decay spectrum of stopped charged
particles. 95% confidence level upper limits on production, over a broad range of lifetime
hypotheses from 5 µs to 0.3 s, are given relative to the observed muon decay rate. In the most
sensitive region around a lifetime of 30µs, new particle production is excluded above 2 · 10−4 of
the stopping muon decay rate.
CHAPTER 1
INTRODUCTION
The objective of this thesis is to search for a new fundamental long-lived charged particle
that may show evidence of supersymmetry (SUSY), or may be related to dark matter (DM)
in the universe.
1.1 Motivation
At the present time the only evidence of DM comes from macroscopic cosmological phe-
nomena (1; 2). DM must be neutral; however, a long-lived charged massive particle
(CHAMP) that eventually decays into DM (and Standard Model particles) may also explain
the abundances of light elements produced during big bang nucleosynthesis (BBN) (3; 4; 5).1
1.2 Background
If elementary particles are the source of DM, the existence of a particle beyond the scope
of the Standard Model (SM) is required.
The Standard Model of elementary particle physics (6; 7) describes the fundamental com-
ponents of matter as well as their interactions via the strong, electromagnetic and weak forces.
Matter, at low energy, is almost exclusively composed of protons and neutrons (baryons), as
well as electrons (leptons). Protons and neutrons (hadrons) are not fundamental, but are them-
1The long-lived CHAMP may constitute a bound state with a light element, and the formation of such bound states may modify nuclear reaction rates in BBN and eventually change light element abundances as predicted from the standard cosmological model.
selves composed of up and down quarks which are held together by the carriers of the strong
interaction (gluons). Baryons interact through strong, electromagnetic and weak forces while
leptons only feel the electromagnetic and weak interactions. At low energies the weak interac-
tion appears in nuclear decays that also produce another kind of lepton, very light and neutral,
called a neutrino. At higher energies we can observe four more quarks, strong mediator parti-
cles called gluons, two more leptons (muons and taus) and their neutrinos, the weak interaction
mediators (Z and Ws) and the electromagnetic mediator (photon) as shown in Figure 1. That
should describe all matter in the universe but it does not. It only accounts for about 17%
of the matter1 in the universe, as shown2 in Figure 2.
Dark matter comprises 23% of the mass-energy content of the observable universe. This
matter, different from atoms, does not emit or absorb light. It has only been detected indirectly
via its gravitational effect. The remaining 72% of the universe is composed of "dark energy", which acts as a
sort of anti-gravity. This energy, distinct from dark matter, is responsible for the present-day
acceleration of the universal expansion.
The search described in this thesis is for a particle that is beyond the description of the SM;
more specifically a long-lived Charged Massive Particle (CHAMP) produced in a collision of a
cosmic ray with the upper atmosphere. The CHAMP lives long enough to reach our detectors
1Responsible for the gravitational interaction in the universe
2Source: Wilkinson Microwave Anisotropy Probe (WMAP) and NASA Science Team. WMAP data is accurate to two significant digits, so the total of these numbers is not 100%. This reflects the current limits of WMAP's ability to define Dark Matter and Dark Energy.
Figure 1. Elementary particles in the Standard Model.
Figure 2. Content of the universe: Atoms includes all SM matter, stars, intergalactic gas, etc.
at sea level, stops and then decays into another charged particle. The characteristics of such a
particle are: it is charged so it leaves a signal in our detectors; it does not feel the strong force,
or it would not be able to penetrate the atmosphere; it probably decays via the weak force,
thus giving it a long lifetime relative to other particles.
1.3 Cosmic rays
Cosmic rays provide enough energy to produce this new massive state. We are not concerned
with the details of cosmic rays; however, their sources are thought to be supernovae and
active galactic nuclei (8). Cosmic rays are a mix of protons and heavier nuclei, but consist
mainly of protons (Figure 3). Many particles are produced in these high-energy air showers,
but most interact so strongly with the atmosphere that they never reach the surface. The
flux at the surface is dominated by muons, as shown1 in Figure 4. Muons are produced in
decays of the hadrons in the shower, about 10 to 20 km above sea level, and lose 2 to 4 GeV to
ionization before reaching the ground, depending on energy (9), as shown2 in Table I (10). Their
energy and angular distribution reflect a convolution of the production spectrum, energy loss
in the atmosphere, and decay (neglecting deflection by the Earth’s magnetic field). The mean
energy of muons at the ground is O(GeV), as the latest results from the BESS experiment (11) show
1The intensity of primary nucleons in the energy range from several GeV to somewhat beyond 100 TeV is given approximately by IN(E) ≈ 1.8 × 104(E/1 GeV)−2.7 nucleons · m−2 sr−1 GeV−1 s−1, where E is the energy per nucleon (including rest mass energy).
2In Table I the Continuous Slowing-Down Approximation (CSDA) is a very close approximation to the average path length traveled by a charged particle as it slows down to rest. In this approximation, the rate of energy loss at every point along the track is assumed to be equal to the total stopping power. Energy-loss fluctuations are neglected. The CSDA range is obtained by integrating the reciprocal of the total stopping power with respect to energy (10).
in Figure 5. Neutrinos are also produced but interact so weakly that most pass through the
Earth.
1.4 Muon lifetime
Muons are unstable particles, approximately 208 times more massive than electrons; they
decay with a mean lifetime of approximately 2.2 µs into neutrinos and an electron or positron.
µ− → e− + ν̄e + νµ
µ+ → e+ + νe + ν̄µ
We search for a particle that behaves like the muon but with a longer lifetime. In this way
muons are the background and also serve as the normalization for our experiment.
1.4.1 Muon lifetime in matter
When muons decay in the proximity of matter their lifetime is affected, because negative
muons have a chance of being captured by a nucleus. The analogous weak
interaction usually happens between electrons and protons (more precisely, an "up"
quark in the proton; see Figure 6).
Regular electron capture p+ + e− → n + νe
Muon capture p+ + µ− → n + νµ
Since we are measuring the lifetime of muons at rest, the longer they take to decay, the
larger the probability they will be captured; thus muon capture competes with
decay, appreciably reducing the measured lifetime. An extensive measurement of
capture rates for different nuclei was performed at the TRIUMF experiment in the late 1980s
Figure 3. Primary cosmic radiation composition: Flux versus energy per nucleus (10,
chapter 24)
[Figure 4 image: vertical flux (m−2 s−1 sr−1) versus atmospheric depth (g cm−2), with altitude (km) on the top axis; curves shown for µ+ + µ−, π+ + π−, e+ + e−, p + n, and νµ + ν̄µ.]

Figure 4. Estimated vertical fluxes of particles in cosmic ray air showers in the atmosphere
with E > 1 GeV. The points show measurements of negative muons with Eµ > 1 GeV (10,
chapter 24).
Figure 5. Spectra of atmospheric muons at the ground (11)
[Figure 6 diagram: electron capture, with time running upward.]

Figure 6. Electron capture, leading order.
T        p        bioniz   bbrems    bpair     bnucl    Σbi      CSDA range
GeV      GeV/c    ————————————— 10−6 g−1 cm2 —————————————       g/cm2
0.01     0.047    7.039                                 7.039    7.862·10−1
0.1      0.176    0.201                                 2.013    3.501·101
1        1.101    2.020    0.000     0.000              2.021    5.077·102
10       10.1     2.627    0.005     0.005     0.005    2.642    4.204·103
100      100.1    3.014    0.078     0.106     0.042    3.239    3.409·104
1000     1000     3.271    1.025     1.479     0.420    6.195    2.307·105
10000    10000    3.522    11.574    16.369    4.730    36.197   7.636·105
100000   100000   3.801    120.091   167.522   56.971   348.385  1.421·106

TABLE I

MUON ENERGY LOSS IN AIR AND CONTINUOUS SLOWING-DOWN APPROXIMATION
(CSDA) PARAMETERS CALCULATED FOR DRY AIR AT 1 ATM, AIR DENSITY
ρ = 1.205·10−3 g/cm3, WITH CHARGE NUMBER DENSITY < Z/A > = 0.49919.
Where T is the muon kinetic energy, p is the muon momentum, Σbi ≡ b is the coefficient in
−dE/dx = a(E) + b(E)E, and a(E) is the ionization energy loss given by the Bethe equation.
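The integral described in the footnote can be sketched numerically. The following Python fragment is an illustration only: it treats a and b as constants (round numbers of the right size for muons in air, not the tabulated values), so it reproduces the tabulated CSDA range at 1 GeV only roughly.

```python
import math

# Illustrative constant coefficients for muons in dry air (assumed values,
# not the tabulated ones): a ~ 2 MeV g^-1 cm^2 ionization loss, b ~ 3e-6
# g^-1 cm^2 summed radiative coefficient, in -dE/dx = a + b*E.
A = 2.0      # MeV g^-1 cm^2
B = 3.0e-6   # g^-1 cm^2

def csda_range(E_MeV, steps=10000):
    """CSDA range R = int_0^E dE' / (a + b E'), midpoint rule, in g/cm^2."""
    dE = E_MeV / steps
    return sum(dE / (A + B * (i + 0.5) * dE) for i in range(steps))

def csda_range_closed(E_MeV):
    """Closed form for constant a, b: R = (1/b) ln(1 + b E / a)."""
    return math.log(1.0 + B * E_MeV / A) / B

R_1GeV = csda_range(1000.0)  # about 5e2 g/cm^2, cf. Table I's 5.077e2
```

With these rough constants the 1 GeV range comes out near 500 g/cm², within a few percent of the tabulated 5.077·10², which is as close as a constant-coefficient model can be expected to get.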
(12). Thus the measurement of the muon vacuum lifetime has to be performed with positive
muons only, and it has been performed with great accuracy: τµ = 2.197013(24) µs.1
For CR stopping muons we know that the proportion of µ+ to µ− is approximately 1.1;
there are 10% more µ+ than µ− as shown in Figure 7. Therefore, our measurement of the muon
lifetime is a mix of τµ− and τµ+ with τµ− shortened due to the capture rate. Assuming a charge
ratio of 1.1, we extracted a µ− capture rate of 40.4 ± 1.9 kHz (as described in Section 4.1.1),
which is consistent with the capture rate measured for carbon (12C) (12).
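As a numerical cross-check, the quoted numbers combine as follows. This is a sketch in Python; the "mixture lifetime" at the end is only the mean decay time of the two-component sample, not a fit to the two-exponential spectrum.

```python
# Values quoted in this section: MuLan mu+ lifetime, extracted mu- capture
# rate, and the assumed 1.1 charge ratio for stopping muons.
TAU_FREE = 2.197013e-6   # s, mu+ (vacuum) lifetime
CAPTURE  = 40.4e3        # 1/s, mu- nuclear capture rate
RATIO    = 1.1           # mu+ / mu- charge ratio

lam_plus  = 1.0 / TAU_FREE       # mu+ total decay rate
lam_minus = lam_plus + CAPTURE   # capture competes with decay for mu-
tau_minus = 1.0 / lam_minus      # shortened mu- lifetime, about 2.02 us

f_plus  = RATIO / (1.0 + RATIO)  # mu+ fraction of the stopped sample
tau_mix = f_plus * TAU_FREE + (1.0 - f_plus) * tau_minus  # mean decay time
```

The shortened µ− lifetime lands near 2.02 µs, consistent with the carbon value the text cites, and the sample mean falls between the two components as expected.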
Figure 7. Experimental muon charge ratio (14; 11)
1MuLan fixed-target experiment (13)
1.5 Long-lived particle searches
Many searches for long-lived particles (LLP) produced by cosmic rays were carried out in
the late 1960s (15). Experimenters searched for slow particles produced in extensive air showers
as well as for particles with lifetimes longer than microseconds. More specifically, they searched
for particles that arrived 20ns to 400ns later than the core of the shower. New limits were set by
a similar measurement repeated in the 1980s (16). Another experiment (17) showed some evidence
of a 60ns delayed particle that may correspond to a 1 to 3 microsecond lifetime but was never
confirmed.
In the late 1970s and 1980s CERN also performed experiments searching for long-lived
strongly interacting particles, looking for evidence of SUSY (18), as well as heavy leptons (19).
Recently several searches were performed at Fermilab looking for long-lived neutral and
charged particles (20; 21). The D0 Collaboration has carried out a search for neutral particles
that have a long lifetime and decay to two muons plus missing transverse energy (22). Also,
D0 set limits for stopping neutral particles with lifetimes from 30 µs to 100 h (23).
A massive charged particle that lives long enough to pass through the detector will leave
behind a greater ionization trail (Bethe-Bloch formula). The particle will appear as a slow-
moving, heavily ionizing muon (24).
CDF and D0 have searched for signatures of CHAMP particles inside their detectors.
CHAMPs are expected to be slow moving and are very penetrating, just like slow muons.
They could decay outside the detector if they are long-lived. In these searches, the experiments
look for events that contain µ-like particles that are slow moving. CDF makes use of its Time-
of-Flight and tracking detectors to measure the CHAMP candidates’ velocities and momenta,
and then determine their masses. D0 identifies the slow moving µ-like particles based on the
timing measurements recorded in the muon detector. No evidence of CHAMPs is observed in
either experiment (25). CDF searched for CHAMPs using the timing capabilities of its outer
tracker. Events are selected with at least one muon of pT > 20 GeV that fired a single-muon
trigger and originates from the primary vertex. The CHAMP masses studied were M > 100 GeV.
Several searches were also performed at the e+e− collider at CERN (LEP) for heavy stable
and long-lived charged particles (SUSY). LEP set constraints on the mass of heavy stable SUSY
leptons (26), and limits on production of long-lived SUSY particles (27).
Recently several searches were proposed and performed at the CERN Large Hadron Collider
(LHC) (28). The search for stopped long-lived gluinos in pp collisions at √s = 7 TeV is an
example (29). This interesting technique looks for evidence of long-lived particles that stop in
the CMS detector and decay in the quiescent periods between beam crossings. The search
interval corresponded to 62 hours of LHC operation. With no significant excess above background
observed, they set limits for lifetimes from 10 µs to 1000 s.
The Auger experiment searches are mainly focused on understanding the origin and compo-
sition of very high energy cosmic rays (30). Currently Auger uses the muon signal for calibration
purposes (31); they have a technique that allows them to measure the level of the pure water in
their detectors from the charge spectrum of Michel electrons1 from decaying muons (32). But
1The Michel electron is the electron product of the muon decay
new improvements are planned for Auger: the AMIGA enhancement, a new muon detector
system that includes scintillation detectors to improve the counting of muons in showers, will
allow Auger to lower its discriminator levels and work directly with muons (33).
Our experiment is very small compared to these other experiments, but has made
a new contribution to the literature. It is dedicated to measuring the muon lifetime directly
and distinguishing it from backgrounds dominated by the coincidence of random particles. We
search for the presence of another particle with a lifetime larger than the muon. Our detectors
are not overwhelmed with other collisions as in the collider experiments, but can be active for
very long times (up to seconds), waiting for the CHAMP to decay. The cosmic ray spectrum
also provides energies above those available at the LHC. We operate separate detectors to tag
events where there are other muons present indicating a high energy incoming CR. Our tagged
events typically come from CRs with energies above 100 TeV (34). The most sensitive LHC
techniques (29) require massive particles; our technique of direct lifetime measurement
is sensitive to particles of much smaller mass.
1.6 Thesis outline
The work in this thesis describes various analysis techniques developed to detect long-lived
charged particles produced in CR that reach our detector at sea level, by a direct measurement
of their lifetime.
• In Chapter 2 there is a description of the apparatus.
• The event reconstruction and data analysis are presented in Chapter 3.
• The results for lifetimes, using decays that occur within 200 µs, and the extraction of upper
limits on CHAMP production are discussed in Chapter 4.
• An alternative very-long-lifetime technique, from 20 µs to 0.5 s, is described in Chapter 5.
• A summary and conclusion is given in Chapter 6.
• In Appendix B there is a detailed exposition of the fitting technique developed specifically
for this analysis.
CHAPTER 2
EXPERIMENTAL PROCEDURE
This research involves measuring the lifetime of stopping particles at (207 ± 10)m1 above
sea level that have been produced by cosmic rays. The particles are detected in a series of
plastic scintillation counters. The time between the particle entering the last counter and the
decay product signal is defined as its lifetime. The advantage of the technique is that there is
no need to correct for Lorentz boost because the detector is in the particle’s rest frame. This
method to measure the lifetime introduces an acceptance limitation since the particle has to
lose its energy in such a way in the atmosphere that it reaches the detector with almost no
kinetic energy. This implies that the muon's initial momentum is less than 3 GeV. Of every 1000
muons produced by CR showers that go through our detector, about 6 stop inside and decay.
2.1 Motivation
The aim of the experiment is to search for long-lived particles produced in cosmic ray
extensive air showers (EAS). These particles must travel from the production point in the upper
atmosphere and stop in our scintillation counter. Therefore, we assume that these particles live
long enough to reach the detector and at the same time, lose energy fast enough to reach the
detector almost at rest (Table I). In addition, their initial momenta have to be large enough to
1Measured with GPS. The detector is located at the Science and Engineering Laboratories at the University of Illinois at Chicago.
avoid being deflected by the Earth's magnetic field (i.e., initial momenta of order of magnitude
0.1 GeV (35)1).
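The order-of-magnitude estimate in the footnote can be reproduced directly from the quoted rigidity relation. A minimal sketch, using the footnote's values:

```python
# Rigidity relation from the footnote: for a relativistic singly charged
# particle, p [MeV/c] = 300 * B [T] * r [m]. The field and radius below are
# the footnote's quoted values (Earth's field ~33,000 nT, 10 km radius).
B_TESLA  = 33_000e-9   # 33,000 nT
RADIUS_M = 10_000.0    # 10 km

p_mev_per_c = 300.0 * B_TESLA * RADIUS_M  # about 99 MeV/c, i.e. ~0.1 GeV
```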
The detector consists of several plastic scintillation counters, with efficiency above 95% and
fast response for the detection of charged particles (36; 37; 38). It measures the arrival time of
incoming particles, with nanosecond precision, and the duration of the energy deposit signal2:
pulse width. It can distinguish the incoming charged particles that pass through it and the
ones that stop. The going-through particles will have an in-time signal in consecutive counters
and the stopping ones will be missing the signal in the bottom-most counter. This missing
counter defines the veto condition (we will refer to it later as "veto") that distinguishes
stopping particles from the non-stopping ones. After the detector triggers on an incoming
particle, it opens a time window3 that allows the measurement of the signal from any charged
decay product. It is also able to measure the time of flight (TOF) of approximately 10% of
the incoming particles.
In summary we use the detector to perform the following functions:
• The measurement of the arrival time and pulse width of incoming charged particles that
satisfy the predefined trigger condition.
1That is, for a relativistic particle, p (MeV/c) = 300 · B · r (T · m) (35), in a uniform and constant magnetic field of 33,000 nT and with a radius of 10 km.
2The pulse width is directly related to the energy deposit but also depends on the proximity of the hit to the PMT.
3We also refer to it as time gate.
• The recognition of particles that stop in the detector.
• The detection of the presence of the decay product, measuring the time between the
incoming stopped particle and its decay signal.
Figure 8 shows a simulation of the decay of a long-lived charged particle (20 µs) as we expect
it to appear in our data.
Figure 8. Montecarlo simulation of a long-lived charged particle decay: The graph shows
muon decays and a 20 µs charged particle (CHAMP) with a decay rate of 1% of the muon
decay rate.
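A minimal toy version of such a simulation can be sketched as follows. This is Python; the sample size, seed and binning are illustrative choices, not the values used for Figure 8.

```python
import random

random.seed(1)
TAU_MU, TAU_CHAMP = 2.2e-6, 20e-6   # s: muon and hypothetical CHAMP lifetimes
N_MU = 100_000                       # simulated muon decays (illustrative)
N_CHAMP = N_MU // 100                # CHAMP rate at 1% of the muon rate

# exponential decay times for both components
times  = [random.expovariate(1.0 / TAU_MU) for _ in range(N_MU)]
times += [random.expovariate(1.0 / TAU_CHAMP) for _ in range(N_CHAMP)]

# 1 us bins out to 40 us; at late times the muon exponential has died away
# and the surviving events are dominated by the long-lived component
hist = [0] * 40
for t in times:
    b = int(t / 1e-6)
    if b < 40:
        hist[b] += 1
```

Plotting `hist` on a log scale produces the characteristic shape of Figure 8: a steep muon exponential at early times with a shallower long-lived tail emerging beyond roughly 20 µs.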
The evolution of the apparatus involved a series of experimental configurations (”eras”) that
gradually increased the rate of stopping particles, improved the angular acceptance, and finally
incorporated the track of the decay product (electrons in the case of muon decays).
2.2 Experiment setup
Data used in this analysis were obtained with two different configurations for the detector:
S1' and SB. Two previous configurations (S2 and S1), used during commissioning, are described
in Appendix A. Appendix A includes a comparison among the four eras to show the evolution
of the detector. The detector consists of three arrays connected to four Data Acquisition Cards
(DAQs); in Section 3.1 we give a full description of the readout system.
Counters are labeled d1 to d8 (Figure 9).1 Overlaid in the space between counters d1 and
d3 in the figure are representations of the counter mapping onto the input channels c1 to c4 for
the T DAQ (yellow) and W DAQ (red). Each DAQ card requires two counters to fire within a
trigger gate to initiate the readout (Section 3.1.1).
Both setups have the capability of measuring time of flight (TOF), but with a limited angular acceptance; about 10% of the incoming particles go through TOF counters d1 and d3, which are separated by 2.34m in S1, S2 and S1' and by 2.38m in SB.
1Notation: When we refer to the hits in a set of counters we write only the counters' numbers, separated by commas, in parentheses. For example (3, 4, 5, 6) means a hit in counters d3, d4 and d5 within a time window, and no hit in counter d6 in the expected time window.
Figure 9. W-T array setup sketch (front view), left to right eras: S1’ and SB. Each of them
utilizes two DAQ readout cards W and T. Trigger requires two hits in each DAQ card. The W
DAQ trigger output supplies one T input: labeled c2.
The W-T array combines the W and T arrays into a single one. The trigger of W is used as an input of T, relaxing the T readout requirement from 2 hits to only 1 hit if W has already satisfied a trigger. The interconnection of the two arrays improves the veto efficiency, since any W trigger is part of the T readout. The correlation of events from W and T1 also becomes more efficient.
The components of our search are: an incoming charged particle, an indication that the particle stopped in the apparatus, and the creation of a new charged particle, which may traverse more than one counter. In the muon decay case, these are referred to as: muon, stopping muon with veto, decay, and electron tag.
The W-T array allows us to track the electron from the muon decay (electron tag). The electrons exit d5 in a straight line, in all possible directions2. The electron tag is defined as a hit in d5 plus only one extra hit in a counter around d5 within a time window3 (except for counter d3, which is taken together with d4); all the remaining counters are used to veto passing-through particles.
• Trigger for incoming muon: (3, 4, 5).
• Trigger for stopping muon: (3, 4, 5, 6, 7, 8), counters d3, d4 and d5 with a "U" veto (veto on d6, d7 and d8).
1The data correlation of W-T array is done by off-line software, see Section 3.1.2.
2Some of the electrons do not leave d5, see Table V at Section 4.2.2.1
3See Figure 15
• Electron: A hit following the stopping muon in counter d5. It does not have to satisfy a trigger. We use the counters around d5 to track the electron and define the electron tag (e-tag). If the electron appears in counter d5 and in no other surrounding counter, we call it isolated (e-iso):
• e-tag: (3, 4, 5, 6, 7, 8) or (5, 6, 3, 4, 7, 8) or (5, 7, 3, 4, 6, 8) or (5, 8, 3, 4, 7, 6)
• TOF: Counters d1 and d3 in-time.
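The counter-pattern notation above (required hits plus vetoed counters) amounts to a simple mask test; a sketch, where the helper is ours and not the thesis software:

```python
# Sketch of the trigger/veto pattern notation: required counters must fire
# in-time and veto counters must not (hypothetical helper, for illustration).
def matches(hits: set, required: set, veto: set) -> bool:
    """True if all `required` counters fired and no `veto` counter did."""
    return required <= hits and not (veto & hits)

# Stopping-muon pattern (3, 4, 5 with a "U" veto on 6, 7, 8):
stopping_required, stopping_veto = {3, 4, 5}, {6, 7, 8}
print(matches({3, 4, 5}, stopping_required, stopping_veto))     # True
print(matches({3, 4, 5, 6}, stopping_required, stopping_veto))  # False (veto fired)
```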
The electron tag increases the purity of the decay sample by requiring not only a second hit in d5 but also a simultaneous hit in one of the counters surrounding d5 (d4, d6, d7 or d8). This increased the electron purity and improved our signal-to-background discrimination. However, in the S1' configuration d4 does not completely cover the top of d5. Therefore, we modified the configuration so that d4, d6, d7 and d8 formed an almost complete box1 around d5 (see SB in Figure 9). Approximately 60% of electrons from muon decay fire one of the counters of this box, defining the electron tag.
Notice that the electron track may trigger W-T if it hits the lower corner (d7, d6 and d5) in S1', because that combination includes two W counters and one T counter. SB also has a trigger for the electron if it hits either of the lower corners (d7, d6 and d5, or d8, d6 and d5). But only about 4% of the decays will have an electron tag with those characteristics (independent electron trigger). This subset of events allowed us to develop a technique to search for CHAMPs with lifetimes up to 0.1s, which is discussed in Chapter 5. We have constructed an array that maximizes the electron
1We refer to it as e-box
tag trigger to allow an improvement in very-long-lifetime production limits (or a signal) in the future.
The passing-through and the stopping muon rates for S1’ and SB are presented in Table II.
                              S1'                     SB
Muon Rate [Hz]                4.179 ± 0.001           2.948 ± 0.001
  Track (counters)            d3 d4 d5 d6             d3 d4 d5 d6
  Counters not present/veto   d7 d8                   d7 d8
Decay Rate [Hz]               0.04293 ± 0.00007       0.04283 ± 0.00005
  Track (counters)            d3 d4 d5                d3 d4 d5
  Counters not present/veto   d6 d7 d8                d6 d7 d8
Run Duration                  180 days, 500k decays   215 days, 700k decays
TABLE II
EXPERIMENT RATE COMPARISON: THE TRACKS FOR THE PASSING AND
STOPPING MUONS WERE TAKEN WITHIN A 100NS TIME GATE. THE TRACKS SHOWN IN
THE DECAY BOXES CORRESPOND TO THE STOPPING MUON. TO CALCULATE THE
DECAY RATE, ANOTHER HIT WAS MEASURED IN D5 (ELECTRON TIME) AND THE TIME
DIFFERENCE BETWEEN THE STOPPING MUON AND THE ELECTRON WAS PLOTTED
AND FITTED (AFTER BACKGROUND SUBTRACTION), GIVING THE TOTAL NUMBER OF
MUON DECAYS; THE RATE IS THEN THE RATIO OF THAT NUMBER TO THE RUNNING
TIME.
Hits in the S and J arrays (Figure 10) are used to select W-T events that were part of an EAS with multiple muons, which is correlated with higher energy of the incident cosmic rays.
Summarizing, the experiment consists of three detector arrays, W-T, S and J, where W-T is a stack of seven or eight counters and S and J are each made of four horizontally distributed counters, as shown in Figure 10. The counters were made of plastic scintillation material connected to 1", 2" and 3" PMTs. Data were collected on four independent readout systems consisting of 4-channel Fermilab QuarkNet DAQ cards.
Figure 10. Experimental layout, front and side view for S1' and SB, and a top view that includes S and J.
CHAPTER 3
DATA ANALYSIS
Four separate DAQs were combined to assemble events containing hits from all fifteen
scintillation counters, based on absolute time stamps from the Global Positioning System (GPS)
signals. Software was developed in four stages, from the raw data to the confidence limit
determination. Understanding the raw data allowed us to fix several hardware problems, and we helped the DAQ developers improve the firmware.
3.1 General procedure
Four streams of raw data come from the three arrays of scintillation counters. Two of the DAQs were used in the W-T detector to measure the stopping and decaying particles, and two more DAQs served the S and J arrays to select high-energy CR showers with multiple muons (34). In total, there were sixteen channels, with fifteen connected to scintillation counters and
a channel used for the trigger from the W array.
For W and T we used a QuarkNet Version 2.5 DAQ board (Series 6000) and for S and J
we used a QuarkNet Version 2.0 DAQ (Series 5000). The series 6000 (5000) had a least count
of 1.25ns (0.75ns). Higher output bandwidth and more stability were attained by the 6000 version, so the 6000 DAQs were used in the W-T setup. Trigger rates of the 6000 (5000) DAQs
were approximately 10Hz (0.1Hz).
W and T were interconnected to share the same GPS signal: both Series 6000 DAQs are daisy-chained, with the W GPS output connected to the T GPS input (T gets the GPS information through W).
W and T were connected to a GPS to have a precise clock and to be able to synchronize
with S and J. Synchronization of EAS events in DAQs with separated GPS units is within
O(100ns) (standard deviation of 126ns, see Figure 11). However the interconnection between
W and T allows a precision of O(10ns) (standard deviation of 20ns); the in-time events fall
with a standard deviation of 8ns. There are also fluctuations of 40ns: since the GPS's one Pulse Per Second (PPS) is stamped with a 25MHz clock, two consecutive time stamps are 40ns apart (Figure 12), and their phase-locking is shifted by one clock pulse.
S and J also share another GPS antenna, but in this case, a specially designed fanout card
was used to split the GPS signal into both DAQs. The time difference between W-T events
and S(J) is displayed in Figure 11 (standard deviation of 126ns).
3.1.1 Trigger definition
Each DAQ has four input channels. The W and T DAQs were programmed to save events
with a hit in at least two counters opening a time window of 164µs (two-fold trigger). The
DAQ has a buffer memory that saves events up to 2µs, i.e. hits are saved up to 2µs awaiting
the trigger condition to be satisfied to initiate readout. This time gate is also named look-back
time because it allows comparison of incoming hits with earlier ones. The DAQ output consists
of leading edge and trailing edge of each pulse with a time stamp. The leading edges and
Figure 11. Synchronization of CRs across DAQs with different GPSs: W-T has a time resolution of 40ns for the GPS PPS and S/J have a time resolution of 24ns. S is considered in-time with W-T if the time difference is between -3000ns and 0ns (red dashed vertical lines). Similarly, J is in-time with W-T if the time difference is between -1500ns and 0ns.
Figure 12. Time distribution of W-T across DAQs: W-T has a time resolution of 40ns. The
red dashed vertical lines, at -130ns and 20ns, indicate the time window used to define the
in-time events.
trailing edge times are measured when the negative voltage pulses go below a preset threshold (Figure 13).
The two-fold time windows were set to almost the maximum values to allow capture of the widest possible range of decays. To analyze the data, a software time window had to be implemented, defining a two-counter in-time hit as two hits within 75ns.
Figure 13. Signal, raw data and reconstructed data: from the signal at the scintillation
counter to the reconstructed data. The least count time is the DAQ clock tick (40ns for DAQ
6000 and 24ns for DAQ 5000).
To guarantee that the counters were working in an efficient regime, their high voltages were plateaued. All DAQ discriminator thresholds were set at 30mV and the PMT high voltages were varied as shown in Figure 14. In addition, each counter was set in two-fold coincidence
with a known efficient counter.
The final detector setup has the W trigger output connected to one of the T DAQ channels (channel 2); in this way the T array has only three counters, but it needs only one counter in-time with the W trigger to fire the T trigger.
The trigger has two stages; the first selection is done in the hardware. The T and W DAQs are set to save data if a minimum of two channels fire within the time gate of 163.830µs, with a buffer time of 2.500µs. Since hits are saved in a buffer, the trigger also captures hits that occur up to 2.5µs earlier; thus the 2.5µs gate is referred to as a "look-back" time. After that, the software selects the data that have two counters within 75ns (Figure 15), which represents 3.5σ in the in-time event distribution (the average standard deviation of the distribution is about 10ns).
The reconstruction software also looks for in-time hits within the time gate extracting all
coincidences in the saved event. That means that the gate time is not a dead time; we will
show that the dead time after a hit is dominated by its own pulse width.
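The software coincidence extraction can be sketched as follows; the function and data layout are ours, for illustration, using the 75ns window quoted above:

```python
# Sketch of the software coincidence step: within a saved event, hits from
# two different counters are called in-time if their leading edges are
# within 75 ns (illustrative helper, not the thesis reconstruction code).
def in_time_pairs(hits, window_ns=75.0):
    """hits: list of (counter, time_ns) tuples; return in-time counter pairs."""
    pairs = []
    for i, (c1, t1) in enumerate(hits):
        for c2, t2 in hits[i + 1:]:
            if c1 != c2 and abs(t1 - t2) <= window_ns:
                pairs.append((c1, c2))
    return pairs

print(in_time_pairs([(3, 0.0), (4, 20.0), (5, 400.0)]))  # [(3, 4)]
```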
The S and J arrays are solely used to tag W-T decays from big showers. The S and J triggers require two counters. The time gate and the look-back time are not the same for both arrays; they are set wide enough to capture events within 400ns (the fine time selection is made in software). The time difference required between S or J and W-T is much bigger than 75ns (Figure 11).
Figure 14. Detector calibration plateaus: W-T scintillation counter calibration using cosmic ray muons; the graphs show the rate of the counter under study (single) as well as the two-fold coincidence rate of that counter with a second overlapping counter.
Figure 15. Time distributions for W-T counters: The red dashed vertical lines, at -130ns and 20ns, indicate the time window used to define all the in-time events with respect to counter d5. The multiple peaks correspond to time differences across DAQs. The time differences that correspond to counters in different DAQs show two sets of two peaks separated by 40ns instead of only two peaks; this is due to a change in the time delay setup; nevertheless the mean values of the time differences remain unchanged. These distributions have standard deviations of 8ns for counters in the same DAQ and 20ns for counters in different DAQs.
This occurs because the in-time hits correspond to different particles in the shower (the counters
are separated horizontally).
3.1.2 Processing the raw data
The raw data is processed in four stages.
• The first stage converts the multiple-line-per-event hexadecimal raw data into one line per event containing the times of the leading and trailing edges of all the hits in all the channels (Figure 13). The GPS time plus the clock fine tune is converted to a time in seconds since January 1st 1970 UTC (Unix epoch; December 31st 6pm, 1969 in Chicago time), with a least count time step of 0.75ns for the DAQ 5000 and 1.25ns for the
DAQ 6000. The GPS produces a Pulse Per Second (PPS) clock stamp into the raw data
that is used to calibrate the processor’s clock speed. The absolute time for each hit in each
counter is calculated using the GPS PPS signal (calibration pulse), DAQ processor clock,
and a custom-made chip that interpolates times between clock ticks, providing 32 extra ticks within a processor clock pulse. More specifically: the GPS time at the moment of the trigger is latched, the number of processor clock pulses until the counter hit is saved, as well as the counts of a faster clock that interpolates 32 (5-bit) time slices between consecutive 5000 (6000) processor clock ticks (fine tune time).
• The second stage fixes identified hardware errors and calculates an average clock frequency over a hundred events, to guarantee that fluctuations in the frequency do not affect times (fluctuations under 40ns for times as large as 1s). The frequency fix is only applied to the 5000, since the 6000 does an automatic fix every PPS.
• The third stage correlates the three arrays W-T, S and J and puts the data into a single file with 16 channels (Figure 16). Each line of the 16-channel file is carefully correlated among the arrays within defined time windows (130ns for W-T, 3µs for WT-S, 1.5µs for WT-J; Figure 15 and Figure 11). Each array has a prefix code: -1, -2, -3 and -4 for W, T, S and J respectively. Correlations between W and T are always present because W-T works as a single array; S and J are only present when a correlation occurs (Figure 17).
• The fourth stage does a sophisticated search for multiple in-time hits within an event (a single line in the 16-channel file) and extracts all possible tracks. It then searches for tracks satisfying a defined trigger condition and a veto requirement. Software triggers and vetoes can be
redefined multiple times by simply masking the counters in the track and in the veto.
A trigger can be made for a stopping particle or for a decaying product independently.
That flexible structure makes possible a search for very long lifetimes (lifetimes outside
the time gate) by combining hits from multiple events.
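The stage-1 absolute-time reconstruction can be sketched as follows; the function is a simplified illustration using the DAQ 6000 numbers quoted above (40ns clock tick, 32 fine-time slices), ignoring the frequency calibration and cable/TMC delays handled in stage 2:

```python
# Sketch of the absolute-time calculation: GPS second + processor clock
# counts since the latched PPS + a 5-bit fine-time interpolation between
# clock ticks. Constants match the DAQ 6000 (40 ns tick, 32 slices), giving
# the 1.25 ns least count quoted in the text. Idealized, for illustration.
CLOCK_TICK_NS = 40.0   # 25 MHz processor clock
FINE_SLICES = 32       # 5-bit interpolator

def hit_time_ns(gps_second: int, clock_counts: int, fine_counts: int) -> float:
    """Absolute hit time in ns since the Unix epoch (no stage-2 corrections)."""
    fine_ns = (fine_counts / FINE_SLICES) * CLOCK_TICK_NS  # 1.25 ns steps
    return gps_second * 1e9 + clock_counts * CLOCK_TICK_NS + fine_ns

# One clock tick plus one fine slice after the start of a GPS second:
print(hit_time_ns(0, 1, 1))  # 41.25 ns
```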
An event display was developed to study decay event properties as well as background
topologies, and was used to check problematic events. It was an important tool for finding and understanding the hardware errors as well as for testing the correction scripts (Figure 17).
We have two different data analysis paths to measure lifetimes: acquisition time-gate limited lifetimes (short lifetimes, up to the DAQ time gate) and very long lifetimes obtained by combining separately triggered events. Both techniques are treated with the same data analysis tools and we set limits for both of them.
Figure 16. Data flow: First stage in orange, second in blue, third in pink and fourth in green.
Figure 17. 16-channel output and event display showing a muon decay with an S and J tag: -2, -3 and -4 prefixes for T, S and J respectively. "X" shows a track for the stopping particle and "x" a track for the decay product. A "2" shows that there is a following hit within 100ns of the previous LE.
Both have advantages and disadvantages. The lifetimes calculated within the acquisition time-gate period do not require a trigger for the decay product; therefore every single hit in counter d5 of DAQ W is taken as a possible decay. In this way the decay sample has no trigger acceptance limitation, but it has the disadvantage of a limited time window of 164µs.
The very long lifetime analysis was carried out as a proof of principle. It needs a trigger for the decay product, requiring two counters in DAQ W and one in DAQ T. This is a big limitation on the acceptance; for instance, in muon decays, the electron has to leave counter d5 to satisfy the trigger requirement, resulting in a 40% data loss. But the advantage is that the decay can be produced seconds later; the time window for the decays is therefore extended from 164µs to seconds, spanning over eight orders of magnitude, and the only limitation is the background rate. To accomplish that we need to understand very well how the randoms (dashed line in Figure 18) appear in the lifetime calculations.
Notice that, within the acquisition time gate, the very long lifetime sample is a subset of the time-gate limited one.
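The flat level of the randoms under a lifetime histogram can be estimated with a simple rate product; the rates below are illustrative, not the measured ones:

```python
# Sketch of the flat "randoms" background in a lifetime histogram (the
# dashed line in Figure 18): uncorrelated single hits at rate R land
# uniformly in time after each stopped particle, so each bin of width dt
# collects N_stop * R * dt accidental entries on average.
def randoms_per_bin(n_stops: float, singles_rate_hz: float, bin_ns: float) -> float:
    return n_stops * singles_rate_hz * bin_ns * 1e-9

# Illustrative numbers: 500k stopped muons, 10 Hz uncorrelated d5 hit rate,
# 100 ns histogram bins:
print(randoms_per_bin(5e5, 10.0, 100.0))  # 0.5 entries per bin
```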
Figure 18. Monte Carlo simulation of randoms (black dashed line) in a simulated 20µs-lived charged particle.
3.2 Hardware studies
Calibration and hardware errors
The look-back time buffer, or TMC delay, affects the absolute time calculation, and it could be the main contribution to the time calculation across DAQs (Figure 19).
The frequency fluctuations in the DAQ 5000 affect the time distributions and the coincidences (Figure 20).
DAQ 6000 v1.10 has a 50ms to 80ms delay before it applies the PPS with respect to the GPS time change (Figure 21). If not repaired, about 10% of the data would contain incorrect PPS information and never correlate. This problem appears very frequently, but not every second, which made it difficult to identify. Roughly the first 70ms after each second (GPS time) gets associated with the previous second (hardware association), and that appears in the data as new data jumping backwards in time. Another issue to correct was the 1s delay introduced in the DAQ 6000 v1.12 in an effort to solve the 50ms to 80ms delay in the PPS. That shift has to be corrected to allow correlations.
On top of the already mentioned items, there are several issues we fixed by software in the DAQ 6000 v1.12 and v1.10: a one-second jump ahead at midnight (UTC) in the 6000 v1.12, and one second ahead and behind at midnight in v1.10; a frozen GPS PPS; and sudden GPS time jumps that eventually self-correct.
Figure 19. Event time calculation: The diagram shows a particle hitting two counters at the
same time (green cross). The counters are connected to different DAQs. The PPS at the GPS
antenna (black dot) is received by the DAQ later, due mainly to cable delay (left red cross).
Finally, the time stamp the hit receives at the readout level (right red cross) is delayed
mainly by the buffer time or TMC delay (look-back time) and also by the signal cables and
the DAQ processing time.
Figure 20. DAQ 5000 frequency correction: The top graph shows how the time drifts, when plotted with respect to the PPS reference, as a function of time since the last PPS. The bottom graph shows the DAQ clock frequency after correction.
Figure 21. DAQ 6000 PPS delay with respect to the GPS latched time: There is a delay of about 50ms to 80ms; the PPS is applied to the data long after the GPS information arrives. The left plot shows the first 70ms attached to the previous second and the right plot shows the absence of data for about the same period of time.
3.3 Detector studies
Lifetime turn on (dead time)
The muon lifetime graph displays a turn-on, or dead time (Figure 22, top graph): because the muon and electron signals both occur in counter d5, electron signals that occur before the muon signal ends are lost. This dead time is determined by the pulse width (PW) of the muon signals (Figure 22, bottom graph).
Figure 22. Muon lifetime turn on (top), stopping muon pulse width (bottom)
Trigger bias effects.
A hump (an enhancement above the expected muon lifetime at times smaller than 2µs)
appearing in the muon lifetime (Figure 23) was understood through the use of the event display.
These extra events are produced by muons that stop in d5 but leave no other hits in the W
DAQ, so the muon does not satisfy the trigger. However an electron that hits two W counters does satisfy the readout trigger. In this situation the muon will be recorded only if the TMC delay, or buffer time window, opened by the electron catches it, and the figure shows that the hump lasts for exactly the TMC delay. Since the electron satisfies the trigger, it can be mistaken for a muon, and a hump appears at negative times but with the muon lifetime; it is therefore not a background process. The extra hump events are muons with a much larger acceptance (hence a higher rate)1.
This effect is removed by requiring stopping muons to satisfy the W-T trigger in both arrays.
1Figure 23 does not correspond to SB nor S1' data; it corresponds to a calibration run that exaggerates the effect of the trigger bias.
Figure 23. Trigger bias ”hump” in the muon lifetime distribution.
CHAPTER 4
RESULTS
In this chapter we present the limits for long-lived charged particles within a lifetime window from 2µs to 160µs for both data samples, S1' and SB. Results for EAS of about 100 TeV are also discussed. First we present the measurement of the µ lifetime, including the effect of µ− capture, since muons are our main background in this range of lifetimes.
4.1 Muon lifetime measurement
The aim of this work is to study lifetimes larger than the muon lifetime, using the muon lifetime as a known background process. It is difficult to search for lifetimes below the muon lifetime, in part because our scintillation signals can be hundreds of nanoseconds wide, complicating lifetime measurements, but also because the muon lifetime is reduced in matter. Negative muons have a non-negligible probability of interacting with a detector's nuclei before decaying. When a negative muon gets close to a nucleus, a proton captures the muon, producing another nucleus and a muon neutrino:
p+ + µ− → n + νµ
In our case we use plastic scintillators which are mostly composed by carbon, then this
capture process takes place between negative muons and carbon nuclei and an excited boron
nucleus is produced:
¹²C + µ⁻ → ¹²B* + νµ
This process thus reduces the muon lifetime measured in matter. The capture rate is a parameter fitted from data, and therefore we have limited sensitivity to another particle's lifetime below the 2µs range.
4.1.1 Muon capture rate
The muon lifetime is fit in the range from 400ns to 10µs. We use the known value of the ratio of positive to negative CR µs, r ≡ µ+/µ− = 1.10 ± 0.03 (Figure 7), to extract the µ− capture rate. Since we do not want to weight one value more than another within the allowed error, we use a flat uniform prior distribution (principle of maximum entropy (39)) for r within the interval given by ±0.03, fitting with the Least Squares Modified Stabilized method described in Appendix B.
The negative muon capture rate rcap is fitted using the following function (Figure 24):
Nfit = r · Nµ− · exp(−t/τµ+) + Nµ− · exp(−t/τµ−) + const (4.1)
Where:
τµ+ = 2.197034µs
1/τµ− = 1/2197.034ns + rcap, where rcap ≡ 1/τcap
Obtaining the capture rate rcap for stopping muons hitting counters d3, d4 and d5 with
U-veto (3, 4, 5, 6, 7, 8) and a single1 electron tag (4, 6, 7, 8):
1Single electron tag means that only one of the e-tag counters is hit at the time the electron appears.
rcap = 39.7 ± 2.3kHz
This result is within the values expected from the specific muon capture rate measurements
for carbon 12C (12).
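A minimal numerical sketch of the fit model of Equation 4.1; it also checks the effective µ− lifetime implied by the fitted capture rate (the function names are ours, for illustration):

```python
import math

# Sketch of Equation 4.1: the decay spectrum is the sum of a mu+ exponential,
# a shortened mu- exponential (decay plus capture), and a flat random term.
# r_cap = 39.7 kHz is the fitted value from the text.
TAU_MU_PLUS_NS = 2197.034

def n_fit(t_ns, n_mu_minus, r, r_cap_per_ns, const):
    """N(t) = r*N*exp(-t/tau+) + N*exp(-t/tau-) + const  (Eq. 4.1)."""
    tau_minus = 1.0 / (1.0 / TAU_MU_PLUS_NS + r_cap_per_ns)
    return (r * n_mu_minus * math.exp(-t_ns / TAU_MU_PLUS_NS)
            + n_mu_minus * math.exp(-t_ns / tau_minus) + const)

# Effective mu- lifetime for r_cap = 39.7 kHz (converted to per-ns units):
tau_minus = 1.0 / (1.0 / TAU_MU_PLUS_NS + 39.7e3 * 1e-9)
print(f"{tau_minus:.1f} ns")  # ~2020.8 ns
```

The ~2020.8ns value agrees with the vertical-muon µ− lifetime quoted in Section 4.1.2.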
The average muon lifetime in Figure 24 is obtained from a single exponential plus a constant background fit:
Nfit = Nµ · exp(−t/ < τµ >) + const (4.2)
Obtaining (requiring U-veto and e-tag):
< τµ >= 2116 ± 4ns
To obtain the capture rate we use a fitting technique based on weighted least squares with the sum of two exponential functions (Equation 4.1). However, the technique has a novel modification, explained in Appendices B.1 and B.2, that we use to improve the error calculation of the fitted parameters (Appendix B.1) and to avoid the exponential fit-weight bias (Appendix B.2). The constant term is due to the random coincidence of an electron-like signal and a muon satisfying the stopped muon conditions.
Figure 24. Muon capture rate fit: Both graphs show events (3, 4, 5, 6, 7, 8), counters 3, 4, 5 with a "U" veto. The second graph adds the electron box tag requirement of one extra hit in counters (4, 6, 7, 8), which is a "box" tag. The electron tag reduces the statistics by a factor of two but reduces the background by a factor of 9. The capture rates measured in the two cases do not show significant differences from 39.7 ± 2.3 kHz, which corresponds to a mean capture time of 25.2 ± 1.4µs.
4.1.2 Muon lifetime East-West asymmetry
This section is intended as a consistency check that highlights the sensitivity of the fitting procedure. We show the differences in the muon lifetime fit due to the stopping muon incident direction. These are consistent with the slight deflection difference between positive and negative slow1 muons traveling through the atmosphere.
The SB layout's North-South orientation (Figure 10) allows us to see the µ+, µ− flux asymmetry due to the Earth's magnetic field. The side counters 7 and 8 face West and East respectively. We select muons coming from the West and East with a geometric constraint on the zenith angle (Figure 25). The Earth's magnetic field is, in most situations, very weak, O(10−5 Tesla), but we have to consider that the stopping muons typically have momentum less than 1GeV and the field is applied over a 15km distance. To give an order of magnitude, a muon with momentum of 1GeV traveling through a 20µT magnetic field experiences a deflection of about 5◦ with respect to the incident direction. Thus, it is reasonable to expect an average deviation of about 10◦ between positive and negative muons, which may be observable in the µ+, µ− flux ratio.
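The order-of-magnitude estimate above can be sketched numerically; the helper is ours, and the small-angle approximation is assumed:

```python
import math

# Deflection estimate: the bending radius of a charged particle is
# r = p / (0.3 * B), with p in GeV/c, B in Tesla and r in meters, and the
# small-angle deflection over a path length L is theta ~ L / r.
def deflection_deg(p_GeV: float, B_T: float, L_m: float) -> float:
    r_m = p_GeV / (0.3 * B_T)        # gyroradius in meters
    return math.degrees(L_m / r_m)   # small-angle approximation

# 1 GeV/c muon, 20 uT field, 15 km path (numbers from the text):
print(f"{deflection_deg(1.0, 20e-6, 15_000):.1f} deg")  # ~5.2 deg
```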
The lifetime is always measured in the same counter, counter 5 (W readout); we are only selecting the direction of the stopping muons. Notice that selecting the veto or electron tag changes the sample but does not change the muon capture rate, nor the average lifetime. However, when we select East or West incident angles we observe significant differences in the lifetime fit.
1Stopping muons have an initial momentum smaller than 3GeV .
Muon charge ratio asymmetry fit procedure:
• The first step was to measure the µ− capture rate using the known muon charge ratio for stopping muons, r = µ+/µ− = 1.10 ± 0.03 (Figure 7). The fit was done using a uniform random distribution in the range r ∈ (1.07, 1.13) with 100 tries, as described in Appendix B.1. The µ− lifetime measured for vertical muons is (center drawing in Figure 25):
τµ− = 2020.6 ± 9.3ns (4.3)
• Using our measured µ− lifetime, we fit r = µ+/µ− for East and West stopping muons. The
µ− uncertainty was introduced using a Gaussian random number generator N(2020.6ns, σ =
9.3ns) with 100 tries. Muon charge ratio for West incident stopping muons (left drawing
in Figure 25):
r = 1.70 ± 0.46 (4.4)
• Muon charge ratio for East incident stopping muons (right drawing in Figure 25):
r = 0.82 ± 0.21 (4.5)
Figure 25. Muon East-West asymmetry: There is a shift in the µ+ to µ− ratio (r = µ+/µ−) for stopping muons due to the Earth's magnetic field applied over approximately 15km. Fixing r = 1.1 for the center configuration, we allow r to float for the other configurations and find r = 1.7 for the West flux and r = 0.8 for the East flux, shown from left to right in the diagram. Blue and red counters have a muon signal and black counters have no signal at the time of arrival of the µ (counters in black are set to veto the event); red counters are connected to the W readout and blue counters to T.
The values obtained for r (Equation 4.4 and Equation 4.5) are within what we expect. If we consider that the muon flux has a cosine-squared dependence on the zenith angle (10), and that this dependence holds for low momentum muons (stopping muons), we can estimate the ratio r from the distributions in Figure 26. The 5.5◦ deviation angle is consistent with the rough estimate we performed at the beginning of this section.
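The Figure 26 estimate can be sketched numerically; the sampling zenith angle of 45° is our illustrative choice, not a parameter from the analysis:

```python
import math

# Sketch of the Figure 26 estimate: individual mu+ and mu- fluxes with a
# cos^2(zenith) shape, a 10% mu+ excess, and a +/-5.5 deg East-West shift.
# Positive zenith angles label the West side, negative the East side.
def charge_ratio(zenith_deg: float, shift_deg: float = 5.5) -> float:
    """r = mu+/mu- at the given zenith angle, with the mu+ flux shifted
    toward the West and the mu- flux toward the East."""
    plus = 1.10 * math.cos(math.radians(zenith_deg - shift_deg)) ** 2
    minus = math.cos(math.radians(zenith_deg + shift_deg)) ** 2
    return plus / minus

print(f"West: r ~ {charge_ratio(45.0):.2f}")   # ~1.62
print(f"East: r ~ {charge_ratio(-45.0):.2f}")  # ~0.75
```

Both values fall within the quoted uncertainties of Equation 4.4 and Equation 4.5.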
We also observe significant differences in the average muon lifetimes among the three configurations (Table III).
Figure 26. Muon East-West flux estimation: We propose a cos2(θ) dependence for the individual µ+ and µ− fluxes, with a 10% excess of the total µ+ flux with respect to the µ− flux. We then shift the two distributions by 5.5◦ to the West and East respectively. Notice that the sum of the fluxes has an approximate cos2(θ) dependence as well.
Because counters 7 and 8 (the West and East counters) are used to define the West and East configurations, we measure the lifetime for the central configuration with and without counters 7 and 8 in the veto, to demonstrate that they do not influence the average lifetime. We compare the fit using the U veto (counters 6, 7 and 8), i.e. (3, 4, 5, 6, 7, 8), and the bottom veto (counter 6) only, i.e. (3, 4, 5, 6).
Average lifetime for stopping muons with bottom veto (3, 4, 5, 6):
< τµ >= 2115.9 ± 3.1ns
In addition, the average lifetime for stopping muons with U veto (3, 4, 5, 6, 7, 8)
< τµ >= 2114.9 ± 3.1ns
Average lifetime for stopping muons with U veto and electron tag (3, 4, 5, 6, 7, 8), single
electron tag (4, 6, 7, 8)
< τµ >= 2115.7 ± 3.8ns
This measurement demonstrates the precision of our fitting procedures (Appendix B) and data analysis path. Although we show that we can distinguish very small lifetime differences in high-statistics samples, we apply the same technique to our CHAMP measurement, where the lifetime difference between the muon and the CHAMP is very large in comparison.
                                   West        Vertical       East
Average lifetime e-tag <τµ> [ns]   2133 ± 10   2115.7 ± 3.8   2102.7 ± 9.7
TABLE III
AVERAGE LIFETIME EAST-WEST ASYMMETRY: THE LIFETIMES ARE FIT WITH A
SINGLE EXPONENTIAL PLUS A CONSTANT BACKGROUND, IGNORING ANY
EFFECT FROM MUON CAPTURE IN THE FIT. ALL LIFETIMES ARE
MEASURED IN THE SAME COUNTER (W COUNTER 5). THE ELECTRON TAG
(4, 6, 7, 8) REDUCES THE NUMBER OF MUONS BY A FACTOR OF 2 BUT ALSO
REDUCES THE BACKGROUND BY A FACTOR OF 9.
4.2 Setting limits
The readout of the SB and S1' samples opens a time-gate for every satisfied trigger, as explained in the Data Analysis chapter (Section 3.1). The time-gate is 163.840µs. In this period of time after the trigger, every hit in all counters is recorded (for both the T and W readouts). If a new trigger occurs during that time, the beginning of the time-gate is reset, extending the readout time. This feature allows us to analyze all hits within this time but sets a limit on the lifetime search window. In the next chapter we relax this limit for a small fraction of the data sample, going to times of O(1s).
4.2.1 Cleaning the data sample
To ensure that the scintillation counters were well behaved over the 2-year data taking period, we define the following good-data criteria:
• All data events must have a GPS-satellite lock and a good GPS PPS signal (Section 3.1.2).
• We histogram all the data (rates of both stopping and passing-through particles) in 1-hour chunks, and we keep only the 1-hour runs that agree within four standard deviations (4σ) of the overall mean value (Figure 27).
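A minimal sketch of the 4σ good-run selection; the rates are illustrative, and the thesis selection is histogram-based rather than list-based:

```python
# Sketch of the good-run cut: keep 1-hour rate bins within 4 sigma of the
# overall mean (illustrative helper using the stdlib statistics module).
from statistics import mean, stdev

def good_runs(rates, n_sigma=4.0):
    """Return the 1-hour rates within n_sigma of the overall mean."""
    mu, sigma = mean(rates), stdev(rates)
    return [r for r in rates if abs(r - mu) <= n_sigma * sigma]

rates = [4.18] * 50 + [2.0]   # 50 good hours and one failed hour (2.0 Hz)
print(len(good_runs(rates)))  # 50 -- the failed hour is rejected
```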
Figure 28 shows the S1' and SB passing-through muons in 1-minute bins plotted over time (days). We can see small fluctuations over time that correlate with atmospheric pressure, upper atmosphere temperature, solar activity, etc. Large variations, however, indicate detector failures.
Figure 27. S1' and SB 1-hour rates used to select the good data: 1-hour rate for the passing-through muons in counters 3, 4 and 5 with a veto on the side counters 7 and 8 (3, 4, 5, 7, 8). The red points are the good data, selected within four standard deviations of the mean rate.
S1’ and SB
S1’ and SB
Figure 28. S1' and SB rates: The upper plots show the 1-minute rate histogram for the
pass-through muon counters 3, 4 and 5 with a veto on the side counters 7 and 8
(3, 4, 5, 7, 8). The lower plots show the same data plotted versus time in days.
As a curiosity, we plotted the pass-through muons (3, 4, 5, 7, 8) with 1-day bins
instead of the 1-minute bins shown before, and we overlay the inverse of the average atmospheric
pressure in Chicago for that day and the average number of sunspots for that month (Figure 29).
We do not correct for these effects since the fluctuations are very small.
Figure 29. Sources of fluctuation of muon rates at sea level: The plots show the 1-day rate
histogram for the pass-through muon counters 3, 4 and 5 with a veto on the side counters 7
and 8 (3, 4, 5, 7, 8) versus time, compared with the inverse of the average Chicago
atmospheric pressure and the monthly average number of sunspots.
4.2.2 Lifetime spectrum comparison
There are two main backgrounds in the decay spectrum: muon decays for t < 10 µs and
random coincidences of muon and electron signals for t > 10 µs¹. For longer lifetimes, the main
background is pass-through muons that fail to trigger the veto counters.
Figure 30 shows a simulation of the decay of a long-lived charged particle (20µs) as we
expect to appear in our data.
Figure 30. Monte Carlo simulation of a long-lived charged particle: The graph shows muon
decays plus a 20 µs charged particle with a decay rate of 1% (left) and 0.1% (right) of the
muon decay rate.
1The time used to separate the muon decay background from the random coincidences depends on the statistics collected; we use 10 µs as an example.
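A toy version of the Figure 30 simulation can be built in a few lines. The normalizations, the flat-background admixture, and the 2.1 µs effective muon lifetime are our illustrative assumptions:

```python
# Toy Monte Carlo of a decay spectrum: muon decays (tau ~ 2.1 us in the
# counter material, assumed) plus a 1% admixture of a 20 us long-lived
# particle and a small flat random-coincidence background.
import random

random.seed(1)
TAU_MU, TAU_X = 2.1, 20.0            # microseconds
N_MU, N_X, N_BG = 100000, 1000, 500  # 1% signal fraction (assumed)

times = ([random.expovariate(1.0 / TAU_MU) for _ in range(N_MU)]
         + [random.expovariate(1.0 / TAU_X) for _ in range(N_X)]
         + [random.uniform(0.0, 160.0) for _ in range(N_BG)])

# Beyond ~15 muon lifetimes the spectrum is dominated by the long-lived
# component and the flat background, which is the region a search probes.
tail = [t for t in times if 30.0 < t < 160.0]
```

Histogramming `times` reproduces the qualitative shape of Figure 30: a steep muon exponential, a shallower signal exponential, and a flat floor.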
4.2.2.1 Muon decays
Figure 31 and Figure 32 show the muon lifetime without and with the muon veto.
We discuss the effects of the veto, electron tag and electron isolation on the improvement of
the background rejection. No restriction on the electron means that the lifetime is measured
considering any hit in counter 5 after the incident muon (counters 3, 4, 5). No veto means that
pass-through muons are not rejected. The electron tag (e-tag) is a hit in the surrounding
counters (only one hit in counter 4, 6, 7 or 8, the e-box counters) synchronized with the electron.
Ideally, the isolation condition means that the electron does not leave counter 5, but there is
a small contamination from the fraction of trajectories leaving counter 5 in which the
electron hits more than one counter in the box, as well as from inefficiencies in the e-box
counters.
The electron tag and electron isolation keep approximately 90% of the muon decays (60%
e-tag and 30% e-iso) and make the background about an order of magnitude smaller. The same
behavior is seen with and without the veto (Table IV).
We choose to present further analysis with the µ-veto e-tag sample (highlighted
in Table IV) because it provides the best sensitivity to a possible CHAMP signal populating
the tail region.
Figure 31. Lifetime spectrum comparison (3, 4, 5), counters 3, 4, 5 with no veto. The top graph
is for the S1' sample and the bottom one for the SB sample. Comparison between the lifetime
measured for all electrons, for electrons with an electron tag, and for electrons
that do not leave counter 5 (isolation condition).
Figure 32. Lifetime spectrum comparison (3, 4, 5, 6, 7, 8), counters 3, 4, 5 with a "U" veto. The
second graph includes an electron tag requirement (4, 6, 7, 8), i.e. a hit in the surrounding
counters ("box" tag). The electron tag and electron isolation retain 90% of the muon decays
(60% e-tag and 30% e-iso) and reduce the background by about an order of magnitude. The same
behavior is seen in the non-veto sample.
Sample            Decays (×10³)   Tail events    Efficiency       Efficiency           Tail/decays       Relative tail/decays
                                                 (vs e-all)       (vs µ-veto e-all)
e-all             1514.2 ± 2.1    178710 ± 420   −                −                    11.80 ± 0.03%     −
e-tag             954.5 ± 1.6     12230 ± 110    63.0 ± 0.1%      −                    1.28 ± 0.01%      10.9 ± 0.1%
e-iso             388.5 ± 1.0     9111 ± 95      25.66 ± 0.08%    −                    2.35 ± 0.03%      19.9 ± 0.2%
e-tag+iso         1342.9 ± 2.3    21340 ± 150    88.7 ± 0.2%      −                    1.59 ± 0.01%      13.5 ± 0.1%
µ-veto e-all      1152.7 ± 1.8    4058 ± 64      76.1 ± 0.2%      −                    0.352 ± 0.006%    2.98 ± 0.05%
µ-veto e-tag      742.3 ± 1.6     316 ± 18       49.0 ± 0.1%      64.4 ± 0.1%          0.043 ± 0.002%    0.36 ± 0.02%
µ-veto e-iso      328.1 ± 1.0     252 ± 16       21.67 ± 0.08%    28.46 ± 0.04%        0.077 ± 0.005%    0.65 ± 0.04%
µ-veto e-tag+iso  1070.4 ± 1.8    568 ± 24       70.7 ± 0.2%      92.9 ± 0.1%          0.053 ± 0.002%    0.45 ± 0.01%
TABLE IV
MUON DECAYS AND BACKGROUND COMPARISON: SB AND S1' ARE SUMMED IN
THIS TABLE. THE NUMBER OF DECAYS IS CALCULATED FROM THE LIFETIME
FIT WITH THE BACKGROUND SUBTRACTED. THE NUMBER OF "TAIL" EVENTS IS
DEFINED AS THE NUMBER OF EVENTS BETWEEN 100 µs AND 160 µs. THE MUON
DECAY EFFICIENCY IS CALCULATED AS THE RATIO BETWEEN THE MUON
DECAYS AND THE NUMBER OF DECAYS WITH NO VETO NOR TAG/ISO
RESTRICTIONS (e-all). THE SECOND MUON DECAY EFFICIENCY COLUMN IS
CALCULATED WITH RESPECT TO THE NUMBER OF µ-veto e-all DECAYS. THE
RELATIVE RATIO OF TAIL AND DECAYS IS CALCULATED WITH RESPECT TO THE e-all RATIO.
4.2.2.2 Muon decays within large EAS
We consider particles produced in large EAS, with incident CR energy greater than 100 TeV,
using the subset of data with S and J tags (34). EAS also produce a larger number of particles
per unit area that arrive together with the incident stopping particle, increasing the chance
that another particle hits the veto counters. Pass-through muons within EAS exhibit the
same behavior. Therefore, we cannot use the veto condition with the S and J tags; instead, we
use the "electron" tag to restrict the background.
Notice from Figure 31 and Figure 32 that the electron tag with no veto on the incident muon
reduces the background by an order of magnitude; therefore the (3, 4, 5) sample with no veto and
e-tag is the sample selected to study the large EAS (S and J tags), because it keeps the stopping
muons that may have a partner passing through the veto counters, which would otherwise veto
the event. If the veto is applied with the S or J tag, almost all the decaying muons self-veto.
Figure 33 and Figure 34 show the muon lifetime for large EAS (S-tag and J-tag) for the S1'
and SB samples respectively, both with no veto applied. The number of decaying muons is
almost 3 orders of magnitude smaller than in the full sample but still measurable with a 5-10%
uncertainty. The numbers of muon decays are presented in Table V¹. Notice that the e-tag
gives a good suppression of the background.
1The binned distribution leads to a discrete exponential which cannot be treated as continuous, since the effect of binning is not negligible. A discussion of how to obtain the total number of decays is in Appendix C.
Figure 33. Lifetime EAS S-tag, no veto: Notice the significant reduction of the background
with the electron tag requirement.
Figure 34. Lifetime EAS J-tag, no veto: Notice the significant reduction of the background
with the electron tag requirement.
S-TAG: S1’ and SB
J-TAG: S1’ and SB
Figure 35. Lifetime EAS J-tag, with veto: The veto on the stopping muon also vetoes the
muons we detect without the veto condition.
To verify that EAS have an extra particle that hits the veto counters (6-bottom, 7-side,
8-side), we measure rates for particles passing through vertically (1, 3, 4, 5, 6) and count how
many of these events have a hit in the side counters 7 and/or 8, first for the whole S1'
and SB samples and afterwards for the S-tag and J-tag subsamples. The test checks whether
vertical muons are accompanied by side muons when the S or J tag is required.
Vertical muons (1, 3, 4, 5, 6) with a veto on the side counters (7, 8):

SB:
ALL: #13456 (no hit in 7, 8) / #13456 = 81.4 ± 0.1%
S-TAG: #13456 (no hit in 7, 8) / #13456 = 5.1 ± 0.1%
J-TAG: #13456 (no hit in 7, 8) / #13456 = 3.7 ± 0.2%

S1':
ALL: #13456 (no hit in 7, 8) / #13456 = 90.1 ± 0.1%
S-TAG: #13456 (no hit in 7, 8) / #13456 = 5.2 ± 0.1%
J-TAG: #13456 (no hit in 7, 8) / #13456 = 3.4 ± 0.2%
Notice that the rate of vertically passing particles with the side veto drops when the S/J tag
is required. The first conclusion is that almost all passing particles in the sample have no
extra hit in the side counters (7, 8); but when concurrent hits in the S or J array are required,
only a very small fraction lack a hit in the sides. Therefore almost all passing muons with an
S/J tag have an extra hit in counter 7, counter 8, or both. Note also that the S/J-tag rate
ratios are almost independent of the detector layout, SB or S1'.
Another interesting rate is that for vertical muons (1, 3, 4, 5, 6) with a hit in both
side counters (7, 8), which indicates that another particle must be present, since no
angle allows a single particle to hit all of them:

SB:
ALL: #1345678 / #13456 = 6.21 ± 0.02%
S-TAG: #1345678 / #13456 = 76.6 ± 0.6%
J-TAG: #1345678 / #13456 = 82 ± 1%

S1':
ALL: #1345678 / #13456 = 3.33 ± 0.01%
S-TAG: #1345678 / #13456 = 79.4 ± 0.5%
J-TAG: #1345678 / #13456 = 84 ± 1%
Before the S/J tag is required, the number of particles that hit both side counters
is relatively small, but most S/J-tagged events have a hit on both sides (counters 7, 8).
Therefore another particle must be hitting 7 and/or 8 when the S/J tag is required,
and these events will be vetoed in the lifetime measurement¹.
1Assuming that the S/J tag behaves in the same way for passing and stopping muons.
S1' e-tag
Number of muon decays: (539 ± 3)·10³   (311 ± 1)·10³
e-tag: (539 ± 3)·10³   (311 ± 1)·10³   |   e-tag S-tag: 540 ± 27   |   e-tag J-tag: 221 ± 16
SB e-tag
Number of muon decays: (676 ± 3)·10³   (431 ± 1)·10³
e-tag: (362 ± 3)·10³   (231 ± 1)·10³   |   e-tag S-tag: 393 ± 23   |   e-tag J-tag: 151 ± 16
TABLE V
NUMBER OF MUON DECAYS COMPARISON: THE NUMBER OF DECAYS IS
CALCULATED FROM A SINGLE EXPONENTIAL FIT. THE NUMBER OF MUON
DECAYS WITH ELECTRON TAG PLUS U VETO IS 80% OF THE NUMBER OF
DECAYS WITH E-TAG.
4.2.3 Negative log-likelihood implementation
The lifetime plots for our six data sets are presented in Figure 36. From the fits, there appears
to be no obvious evidence of another lifetime component. Thus we need a more sensitive method
that allows us to quantify a possible small signal, or to set limits if the null hypothesis
is confirmed.
The confidence limit evaluation we use is based on the Fermilab COLLIE (40) procedure,
which is a semi-frequentist/Bayesian construction (41; 42; 39). This procedure tests two
hypotheses, the null or background-only hypothesis (B) and the test or signal-plus-background
hypothesis (S+B). From the muon lifetime and background fits in Figure 36 (the background-only
hypothesis for the confidence limit evaluator) we propose a signal (CHAMP decay) with a
lifetime we vary over the whole analysis range, and an increasing number of produced CHAMPs
that we adjust to determine the limits.
The test we implement is the Poisson Log-Likelihood Ratio (LLR) test. In general the
likelihood function L is the product of the probability density functions (PDFs) of a list of
results compared with the expected values; L is the same function we maximize every time we
implement a fit¹. Notice that L is a product of PDFs and is therefore a probability itself, with
all the properties of a PDF.
1Least-squares minimization leads to the same equations as maximization of the pertinent likelihood function.
S1’ and SB
S-tag: S1’ and SB
J-tag: S1’ and SB
Figure 36. S1' and SB lifetime fits: counters (3, 4, 5) with e-tag (no veto). The muon lifetime
is measured from 400 ns to 10,000 ns using 1,000 ns bins. The background is fit from
100,000 ns to 160,000 ns using 10,000 ns bins. In the S and J tagged sample plots, the bin size
changes from 1,000 ns to 10,000 ns at 30 µs and at 20 µs to obtain more statistics. The lifetime
fits for the top graphs show no effect from a 1σ fluctuation in the background. The S and J tag
fits use the lifetime fitted from all decays to determine the number of muons.
The Poisson LLR test is implemented as follows:

1. We use 10 bins (M = 10) to calculate the likelihood of fluctuations; each bin follows a
Poisson PDF. For each loop over τsg and Nsg:

NLLR = −2 · log(Ls+b / Lb) = −2 · log( [∏_{i=1}^{M} f(νi | si(τ, N))] / [∏_{i=1}^{M} f(νi | bi)] )   (4.6)

where f is the Poisson PDF and the νi are the values at which the probability is
evaluated¹; basically, the νi are either the numbers measured from data or the sampled ones:

f(νi | si(τ, N)) = exp(−si(τ, N)) · si(τ, N)^νi / νi!   and   f(νi | bi) = exp(−bi) · bi^νi / νi!
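Equation 4.6 can be evaluated directly with the standard library. The helper names are ours; following the equation's notation, `s` holds the signal-hypothesis expectations si(τ, N) and `b` the background expectations bi:

```python
# Direct evaluation of the Poisson NLLR of Equation 4.6.
import math

def log_poisson(nu, mu):
    # log of the Poisson PDF f(nu | mu) = exp(-mu) * mu**nu / nu!
    return -mu + nu * math.log(mu) - math.lgamma(nu + 1)

def nllr(nu, s, b):
    # NLLR = -2 log(L_{s+b} / L_b), summed over the M bins
    return -2.0 * sum(log_poisson(n, si) - log_poisson(n, bi)
                      for n, si, bi in zip(nu, s, b))
```

Background-like data (νi near bi) give a positive NLLR and signal-like data a negative one, which is the separation the pseudo-experiments below exploit.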
2. To construct the likelihood functions Ls+b and Lb for the NLLR test, multiple series of
10⁵ Poisson pseudo-experiments were produced per bin (with M = 10 we have a million in
total) with the following procedure:

(a) We randomly sample Lb with a million events (10 × 10⁵ Poisson pseudo-experiments
centered at bi, obtaining 10⁵ sets of random numbers bi) for each possible
signal hypothesis, i.e. we loop over the number of signal decays (Nsg) and signal lifetime
(τsg), and obtain the NLLRb test for the background-only likelihood Lb
1Notice that the ratio of likelihood functions is a probability too, more precisely a conditional probability, since it is a ratio of probabilities. Taking the log makes the test more sensitive to small amounts of signal.
NLLRb = −2 · log(Ls+b / Lb) = −2 · log( [∏_{i=1}^{10} f(bi | si(τ, N))] / [∏_{i=1}^{10} f(bi | bi)] )¹   (4.7)
(b) In a similar way we randomly sample Ls+b, but this time we must sample every
possible signal-plus-background situation (i.e. 10 × 10⁵ Poisson pseudo-experiments
centered at si(τsg, Nsg), obtaining 10⁵ sets of s+b random numbers si) and obtain
the NLLRs+b test for the signal-plus-background likelihood Ls+b
NLLRs+b = −2 · log(Ls+b / Lb) = −2 · log( [∏_{i=1}^{10} f(si | si(τ, N))] / [∏_{i=1}^{10} f(si | bi)] )   (4.8)
(c) NLLRb and NLLRs+b are then binned using a bin size of 0.2 in NLLR; a 5-point smoothing
algorithm is applied and intermediate NLLR values are linearly interpolated for
Lb and Ls+b².
3. Calculating the p-values:
The p-value is defined as the probability of obtaining an LLR test value, for a hypothetical
signal, larger than the random fluctuations of the background (assuming that the background
hypothesis is true).
We calculate the p-values with the following procedure:
1Notice that the sampled bi is a Poisson random number generated from the estimated bi, which is a fixed number.
2Notice that Lb and Ls+b are functions of 10 variables each; but now they are written as functions of a single variable, the NLLR test value.
(a) To obtain the 95% Confidence Level (CL), CLs(bi) = 5%, for the estimated background
bi: we have a double nested loop; for each τsg we loop on Nsg and find the Nsg that
gives CLs(bi) = 5%, where

CLs(bi) = CLs+b(bi) / CLb(bi)

Then

CLs(bi) = ∫ Ls+b(NLLR > NLLR_bi) / ∫ Lb(NLLR < NLLR_bi)

and

NLLR_bi = −2 · log( [∏_{i=1}^{M} f(bi | si(τ, N))] / [∏_{i=1}^{M} f(bi | bi)] ),

which is also NLLRb ≡ NLLR_bi = mean(Lb(NLLR)).

(b) The previous step is repeated from the beginning to obtain CLs for NLLRb ± σ and
NLLRb ± 2σ:

CLs(NLLRb + σ) = ∫ Ls+b(NLLR > NLLRb + σ) / ∫ Lb(NLLR < NLLRb + σ) = 5%
CLs(NLLRb + 2σ) = ∫ Ls+b(NLLR > NLLRb + 2σ) / ∫ Lb(NLLR < NLLRb + 2σ) = 5%
CLs(NLLRb − σ) = ∫ Ls+b(NLLR > NLLRb − σ) / ∫ Lb(NLLR < NLLRb − σ) = 5%
CLs(NLLRb − 2σ) = ∫ Ls+b(NLLR > NLLRb − 2σ) / ∫ Lb(NLLR < NLLRb − 2σ) = 5%

To calculate limits for Nsg with respect to the data (10-bin measurement ni), the whole
procedure is repeated, now looking for CLs(ni) = 5%:

CLs(ni) = ∫ Ls+b(NLLR > NLLR_ni) / ∫ Lb(NLLR < NLLR_ni) = 5%,

where NLLR_ni = −2 · log( [∏_{i=1}^{M} f(ni | si(τ, N))] / [∏_{i=1}^{M} f(ni | bi)] ).
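The whole pseudo-experiment machinery of steps 2-3 can be sketched compactly. The expectations (bi = 50, si = 55 per bin) and the reduced number of pseudo-experiments are illustrative assumptions, not the analysis values:

```python
# Sketch of the NLLR pseudo-experiment construction and the CL_s ratio.
import math
import random

random.seed(2)
M, N_PE = 10, 2000
b = [50.0] * M          # assumed background expectations per bin
s = [55.0] * M          # assumed signal-plus-background expectations

def log_poisson(nu, mu):
    return -mu + nu * math.log(mu) - math.lgamma(nu + 1)

def nllr(nu):
    return -2.0 * sum(log_poisson(n, si) - log_poisson(n, bi)
                      for n, si, bi in zip(nu, s, b))

def poisson_draw(mu):
    # Knuth's multiplication method; adequate for these modest means
    limit, k, p = math.exp(-mu), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

nllr_b = sorted(nllr([poisson_draw(mu) for mu in b]) for _ in range(N_PE))
nllr_sb = sorted(nllr([poisson_draw(mu) for mu in s]) for _ in range(N_PE))

obs = nllr_b[N_PE // 2]                       # median background NLLR
cl_sb = sum(1 for x in nllr_sb if x > obs) / N_PE
cl_b = sum(1 for x in nllr_b if x < obs) / N_PE
cl_s = cl_sb / cl_b                           # CL_s = CL_{s+b} / CL_b
```

With the "observed" value taken at the background median, cl_b is near 0.5 and cl_s shrinks as the assumed signal grows; scanning the signal normalization until cl_s = 5% gives the 95% CL limit.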
It is easier to show how the p-values are obtained with an example. Figure 37 shows
how to obtain the NLLR values for the 95% Confidence Level for the background, background ±1σ,
background ±2σ, and the tested data. Each of these values corresponds to a different
number of CHAMPs, and this is done for each proposed lifetime (Figure 38).
Figure 37. NLLR 95% Confidence Limits: The blue PDF corresponds to Lb, the red PDF to
Ls+b. The 5% is obtained from the ratio of the red area to the blue area. Lb does not
change, but Ls+b is fluctuated to get the 95% CL for b ± σ, b ± 2σ and the actual data.
Figure 38. NLLR 95% Confidence Limits example: This example of a CHAMP with a
lifetime of 0.43 s shows how the number of signal events required varies to obtain the 95% CL
for the background, b ± σ, b ± 2σ and the actual data.
Since the lifetime spectrum plots show no evidence of a signal, we need a more sensitive
way to quantify the presence of a few events, if any. That is the reason for choosing the
confidence limits (CLs) from the NLLR test.
The signal lifetimes are selected depending on the range of application of the LLR test
(range of bins). We follow these general criteria:
• We test 10 bins at a time. We start the bins from the smallest possible time, since
a decreasing exponential has the most statistics at the smallest times.
• Every time we want to include longer lifetimes we increase the size of the bins; therefore
we always include all previous data.
• We take lifetimes larger than the first bin and smaller than half of the highest bin
center (the analysis range).
• Following the previous steps produces redundant lifetimes, so we select, when
possible, the lifetimes between 1/3 and 1/2 of the range (Figure 39).
• This chapter's data sample has a time window of 164 µs, so we stop increasing
the size of the bins when we reach a 16 µs bin size. In this exceptional case we continue
increasing the signal lifetime but calculate the signal and signal plus background within
the 160 µs range.
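The steps above can be sketched as follows. The doubling of the bin size and the numeric starting values are our illustrative assumptions; the text specifies only that the bins grow and that the growth stops at 16 µs:

```python
# Sketch of the lifetime-selection criteria (assumed bin progression).
def analysis_ranges(first_bin_us=1.0, max_bin_us=16.0, n_bins=10):
    """One (min, max) tested-lifetime range per 10-bin configuration.

    The upper edge is half the center of the highest bin.
    """
    ranges = []
    bin_us = first_bin_us
    while bin_us <= max_bin_us:
        highest_center = (n_bins - 0.5) * bin_us
        ranges.append((bin_us, highest_center / 2.0))
        bin_us *= 2.0            # assumed growth step
    return ranges

def pick_lifetimes(lo, hi):
    # Prefer lifetimes between 1/3 and 1/2 of the range.
    span = hi - lo
    return (lo + span / 3.0, lo + span / 2.0)
```

Each configuration contributes its preferred lifetimes, and together the configurations tile the full search range without redundancy.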
Figure 39. NLLR signal selection
4.3 Results for SB and S1’ samples
Before presenting the limits, we provide a short discussion about the background we fit. A
constant background, as in the case of passing-through muons with a constant rate, does not
appear as a constant in a lifetime plot. It is actually an exponential (see Section 5.1):
Nb = [Ntr · Ndc / (Ntr + Ndc)] · 2 · sinh( (Rtr + Rdc) · δ / 2 ) · exp(−(Rtr + Rdc) · T)   (4.9)
where δ is the bin size, Ntr is the total number of background processes that fake a trigger (for
example (3, 4, 5, 6, 7, 8)), and Ndc is the total number of background processes that fake a decay
signal (for example counter 5 and e-tag with a veto on the rest of the counters).
Also Rtr = Ntr/Tr and Rdc = Ndc/Tr, where Tr is the total running time (approximately 200 days,
depending on the sample) and T is the time at the center of each bin.
This background estimation is tested rigorously in Chapter 5. In the current analysis the
decays we measure need not correspond to a trigger, since the 164 µs time gate captures every
possible hit. This time gate does not cover the whole running time; it opens only after a
trigger. Thus we do not have a measurement of the total number of decays and therefore cannot
measure the rate directly. Nevertheless, we estimate the rate using Equation 4.9, since we can
measure the number of triggers.
Figure 40 shows the result of this background estimation from rates, using the number
of muon decays calculated from the fit. The Monte Carlo simulation generates the hits, and the
reconstruction process follows the same steps as for real data. The agreement is very clear. To
demonstrate how a CHAMP signal might be observed and how it would follow the estimation,
the background plus a simulated CHAMP signal is shown in Figure 41, which includes 20,000
Monte Carlo-generated CHAMPs (a 3.5 · 10⁻² fraction of the muon decays) with a lifetime of 20 µs.
The estimation of signal plus background, shown in green, is done with:

Ns+b = N°d · exp(−T/τµ) + Nsg · 2 · sinh( δ / (2τsg) ) · exp(−T/τsg) + Const   (4.10)

where Nsg = 20,000 is the total number of CHAMPs generated, τsg = 20 µs is the CHAMP
lifetime, and δ = 1000 ns is the bin size.
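Equation 4.10 translates directly into code. The default muon lifetime below is the free-muon value (2197 ns), used here only as a placeholder for the fitted τµ, and N°d and Const are placeholders for the fitted normalization and flat term:

```python
# Expected signal-plus-background histogram content at bin center T
# (Equation 4.10); all times in nanoseconds.
import math

def n_s_plus_b(T_ns, N0_d, const,
               tau_mu_ns=2197.0, n_sg=20000, tau_sg_ns=20000.0,
               delta_ns=1000.0):
    muon = N0_d * math.exp(-T_ns / tau_mu_ns)
    champ = (n_sg * 2.0 * math.sinh(delta_ns / (2.0 * tau_sg_ns))
             * math.exp(-T_ns / tau_sg_ns))
    return muon + champ + const
```

A useful consistency check of the 2 sinh(δ/2τ) binning factor: summing the CHAMP term over all bin centers T = 500, 1500, 2500, ... ns returns exactly the total number of generated CHAMPs.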
Data
Montecarlo Simulation
Figure 40. Lifetime background simulation: Data and simulation.
Montecarlo Simulation
Figure 41. CHAMP simulation: 20000 CHAMP events with a lifetime of 20µs.
To calculate upper limits for S1' and SB, we follow the procedure explained in Section 4.2.3.
We have 10 bins, and the signal lifetime is taken as larger than the first bin time and smaller
than 1/2 of the last bin center; in that way we always have at least 2 lifetimes in the interval.
The limits are presented as a fraction of signal with respect to the total number of stopping muons.
Figure 42 shows the limits applied to the SB sample with no veto (counters 3, 4, 5, no veto)
and a single electron tag. The sample with no veto is not the cleanest one, but we introduce
it to be able to compare with the results for large EAS (Section 4.4). The sample with
veto has improved background rejection (a better ratio between the muon lifetime and the
background), which improves the CLs limits.
Figure 42. 95% Confidence Limits for the SB sample: The sample corresponds to (3, 4, 5) incident
muons with no veto but a single electron tag in one of the counters (4, 6, 7, 8).
The data are limited to 164 µs, but we can extend the limit search using that data (Figure 43).
Fluctuations above 2σ may indicate, at 95% CL, the presence of another particle with a
lifetime at the location of the fluctuation. Fluctuations below 2σ may indicate that the
background is not well modeled, or may also be meaningful; for instance, muon capture would
show up as a deficit of signal particles that the background cannot explain.
Figure 43. 95% Confidence Limits for the S1' and SB samples: The sample corresponds to (3, 4, 5)
incident muons with no veto but a single electron tag in one of the counters (4, 6, 7, 8).
Both samples are combined in Figure 44, so we can compare them with the large EAS limits
in Section 4.4.
Figure 44. 95% Confidence Limits for the SB and S1' samples: The sample corresponds to (3, 4, 5)
incident muons with no veto but a single electron tag in one of the counters (4, 6, 7, 8).
We present the limits for the data sample with the smallest random background relative to
muon decay (Table IV), which includes the stopping-muon veto and e-tag (Figure 45). In our
most sensitive regime we exclude, at the 95% CL, CHAMP production at rates above
2 · 10⁻⁴ with respect to the stopping-muon decay rate.
Figure 45. 95% Confidence Limits for the S1' and SB samples: The sample corresponds to
(3, 4, 5, 6, 7, 8) stopping muons with a veto in counters 6, 7 and/or 8 and a single electron
tag in one of the counters (4, 6, 7, 8).
4.3.1 Systematic errors estimation
This CHAMP lifetime search is based exclusively on measuring the time between hits in
counter 5 (W DAQ). The DAQ CPU clock is calibrated to 25 MHz every second by the GPS
PPS signal¹. Figure 46 shows the measurement of the number of ticks between PPS signals,
obtaining:

24999999.99924 ± 0.00054 Hz

That means we have a precision of 40 ns in 10⁹ ns. Therefore, the absolute time accuracy does
not significantly affect the lifetime measurements between 2 µs and 200 µs².
Fluctuations of two standard deviations (2σ) in the background level do not affect the
muon lifetime measurement (the main background), as shown in Table VI.
The shape of the background is purely exponential and agrees well with the estimation
from rates.
Since we have no significant systematic errors beyond the procedure we defined to determine
the lifetime-fit ranges, we use only Poisson statistical fluctuations as nuisance parameters in
the CLs method.
1The PPS is provided by Cesium clocks at the GPS satellites.
2We only use data that has ”good” GPS PPS information.
Figure 46. W clock frequency: 24999999.99924 Hz with a standard deviation of 0.70 Hz
(1,659,000 samples). Every second (the second is defined by the GPS PPS) the number of clock
ticks of the DAQ processor is measured and reported.
SB background   SB < τµ >          S1' background   S1' < τµ >
106.0 ns        2113.4 ± 4.0 ns    118.3 ns         2109.1 ± 3.6 ns
97.8 ns         2114.6 ± 4.5 ns    109.8 ns         2109.9 ± 3.7 ns
89.8 ns         2115.6 ± 4.2 ns    101.2 ns         2110.4 ± 3.4 ns
TABLE VI
BACKGROUND 2σ FLUCTUATIONS IN THE MUON LIFETIME: THE MUON LIFETIME IS
MEASURED BETWEEN 400 ns AND 10 µs AND THE BACKGROUND BETWEEN 100 µs
AND 160 µs. THE SB BACKGROUND IS (97.9 ± 4.0) ns AND THE S1' BACKGROUND IS
(109.8 ± 4.3) ns.
4.4 Results with extensive air shower S tag and J tag
The limits for the large EAS were extracted by combining both data samples, SB and S1'.
The background regions were extended to include a minimum of 20 events for the S-tag
and 10 events for the J-tag (Figure 36). This small data set yields poorer CLs limits; however,
it provides new information: CHAMP production is not significantly enhanced at the highest
CR energies.
Figure 47. 95% Confidence Limits for large EAS: The sample corresponds to (3, 4, 5) incident
muons with no veto but a single electron tag in one of the counters (4, 6, 7, 8).
CHAPTER 5
EXTENDED DECAY TIME SEARCH
In Chapter 4 we set limits for lifetimes smaller than 10 ms because the 164 µs DAQ time
window restricted the recorded decays. In this chapter we extend the stopping-particle decay time.
To accomplish that, however, we need the decay itself to trigger the readout, which happens only
for some restricted trajectories. Thus, we can use only a portion of the total dataset (Figure 48).
This demonstrates that the technique can improve CHAMP limits at much longer lifetimes if the
apparatus is upgraded.
Figure 48. Extended-time possible trajectories: The stopping particle (e.g. a muon) and the
decay product (e.g. an electron) have to trigger the W (red) and T (yellow) readouts. For W,
two counters are needed; for T, only one, since W also triggers T.
We use the same trigger condition for the "muon" as in Chapter 4: counters 3, 4 and
5 with a veto in counters 6, 7 or 8. But the "electron" is defined such that it also triggers
the readout, so separate "muon" and "electron" events can be combined to probe long times.
To satisfy the trigger we need two counters in the W readout and only one in the T readout;
we can see in Figure 48 that the decay product must hit counter 5 and two sides of opposite
readout.
The reconstruction software was specially developed to search for any particle definition,
both inside the special time window and across multiple events; in that way we avoid gaps in
coverage between the single-event and multiple-event samples¹. The same reconstruction
software is used for the decays in the Chapter 4 time window, to catch all possible decays²
and all possible background hits. Our goal is to search for CHAMPs with lifetimes significantly
larger than those measured in Chapter 4. We investigate decay times larger than 50 µs in order
to be insensitive to muon decays.
5.1 Background estimation
The background in this longer lifetime measurement cannot be considered constant at
long times, even though we were able to treat it as constant in Chapter 4. In general it is an
exponential that we can calculate from our measured "muon" and "electron" rates.
1Each line of the reconstructed file (fmap) contains a trigger and all the hits within the time gate window of the readout.
2It is unlikely to have two stopping muons in the same 164 µs time window, since the inverse muon lifetime is about 0.5 MHz while the stopping muon rate is O(1/min), even for a running time of a year.
Let us consider a background-only sample. Then we have a constant rate for the stopping
"muon" and a constant rate for the decay "electron". These rates are uncorrelated (independent).
But we still look for decays, taking the time difference between a "muon" and the
nearest "electron". Intuitively we might expect this time difference to be flat, but it is
not; it is exponential.
Let us first calculate the probability of not getting a hit in a time interval for a constant-rate
rA¹ process (43, chapter 4.5); that is, the probability of having no hits in a time
interval ∆t. If ∆t is very small we get:

P(A, ∆t) = 1 − rA · ∆t   (5.1)
To calculate the probability for a finite time t, we consider n time intervals ∆t. Since the
probability does not depend on time (independent intervals), we can multiply them:

P(A, n∆t) = (1 − rA · ∆t)^n = [(1 − rA · ∆t)^(1/(rA·∆t))]^(n·rA·∆t)   (5.2)

[(1 − rA · ∆t)^(1/(rA·∆t))]^(n·rA·∆t) → exp(−rA · t)   as ∆t → 0, n → ∞, n∆t → t   (5.3)
1The rate is the number of hits per unit of time rA = NA/Tr.
Normalizing we obtain:

P(A, t) = rA · exp(−rA · t) ≡ (NA/Tr) · exp(−rA · t)   (5.4)
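Equations 5.1-5.4 are easy to verify numerically: hits thrown uniformly at a constant rate have exponentially distributed waiting times. The rate and running time below are arbitrary illustrative units:

```python
# Numerical check that constant-rate hits have exponential waiting times.
import random

random.seed(3)
r_a, T = 2.0, 100000.0                # assumed rate and running time
n_hits = int(r_a * T)
hits = sorted(random.uniform(0.0, T) for _ in range(n_hits))
gaps = [b - a for a, b in zip(hits, hits[1:])]

mean_gap = sum(gaps) / len(gaps)      # expect 1 / r_a = 0.5
# Survival probability P(no hit in t) = exp(-r_a * t), checked at t = 1/r_a,
# where it should equal 1/e ~ 0.3679.
frac = sum(1 for g in gaps if g > 1.0 / r_a) / len(gaps)
```

Both the mean gap and the survival fraction match the exponential prediction to well below a percent with this sample size.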
Now, going back to the initial situation, we have two independent¹ constant rates that
interfere: the rate of stopping muons rST and the rate of decay-candidate electrons rDC.
After an ST event occurs there are two cases: a DC event occurs or another ST event occurs.
The probability of no events in a time interval t is:

P(DC ∪ ST | ST, t) = P((DC ∩ ST) ∩ ST, t) / P(ST, t)   (5.5)
Since ST and DC are independent:

P((DC ∩ ST) ∩ ST, t) / P(ST, t) = P(DC, t) · P(ST, t)   (5.6)
And using Equation 5.4:

P(DC ∪ ST | ST, t) = (NDC/Tr) exp(−rDC · t) · (NST/Tr) exp(−rST · t) = (NDC · NST / Tr²) exp(−(rDC + rST) · t)   (5.7)
1Independent in the case of no decays. The number of electrons produced in decays does depend on the number of muons.
The total number of background events can be calculated by integrating:

NBGtot = Tr · ∫₀^∞ (NDC · NST / Tr²) exp(−(rDC + rST) · t) dt = NDC · NST / (NDC + NST)   (5.8)
Thus, the number of background events as a function of time is:

NBG = [NDC · NST / (NDC + NST)] · exp(−(rDC + rST) · t)   (5.9)
For a discrete histogram (see Appendix C):

NBGd = [NST · NDC / (NST + NDC)] · 2 · sinh( (rST + rDC) · δ / 2 ) · exp(−(rST + rDC) · T)   (5.10)

where δ is the bin size and T the time at the center of each bin. Notice that NBGd depends
on NST, NDC and Tr, i.e. NBGd(NST, NDC, Tr).
Figure 49 shows the agreement between the simulated background and the estimation
proposed in Equation 5.10.
The data also show good agreement with the background estimation (Figure 50 and
Figure 51).
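A toy Monte Carlo of this background is straightforward: throw independent constant-rate "stopping muon" (ST) and "decay candidate" (DC) hits, and for each ST take the time to the next DC-or-ST hit. Equation 5.9 predicts an exponential with rate rST + rDC. All counts and times below are illustrative:

```python
# Toy check of the uncorrelated-rates background shape (Eq. 5.9).
import bisect
import random

random.seed(4)
T_r = 100000.0                         # assumed running time
N_ST, N_DC = 20000, 60000              # assumed hit counts
r_st, r_dc = N_ST / T_r, N_DC / T_r    # 0.2 and 0.6 per unit time

st = sorted(random.uniform(0.0, T_r) for _ in range(N_ST))
dc = sorted(random.uniform(0.0, T_r) for _ in range(N_DC))

dts = []
for t in st:
    waits = []
    i = bisect.bisect_right(dc, t)
    if i < len(dc):
        waits.append(dc[i] - t)        # time to the next "electron"
    j = bisect.bisect_right(st, t)
    if j < len(st):
        waits.append(st[j] - t)        # time to the next "muon"
    if waits:
        dts.append(min(waits))

mean_dt = sum(dts) / len(dts)          # expect 1/(r_st + r_dc) = 1.25
```

Histogramming `dts` reproduces the exponential fall of Figure 49, and over a range much shorter than 1/(r_st + r_dc) the same distribution looks constant, as in the Chapter 4 window.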
Figure 49. Background estimation of simulation: Constant-rate generated data for muon and
electron. The bin size used is 5 · 10⁻⁴ s. Notice that this background would appear constant in
the 0–200 µs range.
Figure 50. Data, simulation and estimation for background.
Figure 51. Data with the background estimation subtracted: Notice that the data are plotted from
0 s to 5 s to show the agreement in more detail.
Figure 52 shows that the background predicted from rates is also almost constant in the
0–300 µs range. Notice that there is no discontinuity after the 164 µs limit, showing
agreement between the background measured in the "all hits" region and the
trigger-dependent long times.
Figure 52. Estimation of background for short lifetimes: Notice the continuity of the data
after 164 µs.
5.2 Signal plus background estimation
If NDC includes a number NSG of real decays of a particle with lifetime τSG, then we
obtain:

NSB+BG = NmSG · 2 · sinh( δ / (2τmSG) ) · exp( −t / τmSG ) + NBGd(NmST, NmDC, Tr)   (5.11)

where

NmST = NST − NSG   (5.12)

NmDC = NDC − NSG   (5.13)

NmSG = ( 1/(NDC − NSG) + 1/NSG )⁻¹   (5.14)

τmSG = ( (NDC − NSG)/Tr + 1/τSG )⁻¹   (5.15)
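Equations 5.12-5.15 translate directly into code; the function name and the numerical inputs in the example are ours and purely illustrative:

```python
# Modified signal parameters of Equations 5.12-5.15.
def modified_signal(N_ST, N_DC, N_SG, tau_SG, T_r):
    N_ST_m = N_ST - N_SG                                   # (5.12)
    N_DC_m = N_DC - N_SG                                   # (5.13)
    N_SG_m = 1.0 / (1.0 / (N_DC - N_SG) + 1.0 / N_SG)      # (5.14)
    tau_SG_m = 1.0 / ((N_DC - N_SG) / T_r + 1.0 / tau_SG)  # (5.15)
    return N_ST_m, N_DC_m, N_SG_m, tau_SG_m
```

Note that both the effective signal normalization and the effective lifetime shrink relative to the inputs: the random-coincidence rate of the remaining decay candidates competes with the true decay.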
5.2.1 Systematic errors estimation
For the extended lifetime analysis, the main systematic error source is the accuracy and
precision of the time measurement, because the lifetime measurement is based exclusively
on measuring the time between hits in counter 5. We have a very accurate time determination
and a precision of 40 ns in 10⁹ ns (see Section 4.3.1). Therefore, the absolute time accuracy and
precision do not affect the lifetime measurements between 2 µs and 0.1 s.
The shape of the background is purely exponential and agrees well with the estimation
from rates, as shown in Figure 50 and Figure 51.
Since we have no significant systematic errors we use only Poisson statistical fluctuations
as nuisance parameters in the CLs method.
5.3 Confidence limits determination applied to very long lifetimes
The method to calculate the confidence limits is basically the same method developed in
Section 4.2.3. The only differences are the measurement and calculation of rates, and that there
is no time window for the lifetime data; the lifetime data extends along time depending on the
amount of statistics collected.
We discard one-hour data chunks that disagree by more than four standard deviations from the
mean value. The estimation of the background and background-plus-signal histograms is calculated
from the measured rates.
The limits are consistent with the background hypothesis. Limits for the SB 568 corner are
presented in Figure 53 and for the S1' 567 corner in Figure 54.
The 2σ signal excess seen in Figure 53 is not present in the S1' sample (Figure 54).
Figure 53. Long lifetime limits for SB.
Figure 54. Long lifetime limits for S1’.
CHAPTER 6
CONCLUSIONS
The data analyzed present no evidence of a new particle in the decay spectrum of stopped charged particles in cosmic rays at sea level.
We set 95% confidence level upper limits on production over a broad range of lifetime hypotheses, from 5 µs to 0.3 s, normalized to the observed muon decay rates.
In Chapter 4 we show the sensitivity of the fitting technique (Appendix B), measuring the
µ− capture rate and later the µ+-µ− West-East asymmetry.
In Chapter 4, Figure 45 shows our best CHAMP limits, obtained with the largest subset of data: a stopping "muon" with veto and "electron" tag. The dashed line shows the expected limit for background, and the continuous line the observed limit. For example, we observed less than 4 × 10^−4 of the production rate of decaying muons for a new particle with lifetime between 7 µs and 100 µs.
In Chapter 4, Figure 47 shows the CHAMP limits obtained for large EAS, i.e. showers produced by an incident CR with energy larger than 100 TeV (34).
In Chapter 5 we developed a new technique to extend the search to very large lifetimes (VLL), on the order of seconds; however, it could be much more sensitive with an efficient trigger. A new detector is being constructed to take advantage of this new technique.
APPENDICES
Appendix A
SETUP HISTORY
A.1 Experiment configurations
In addition to the S1’ and SB configurations we also ran with two previous configurations:
S2 and S1. Each era consists of three or four detector arrays connected to four Data Acquisition
Cards (DAQs); in Section 3.1 we give a full description of the readout system.
Counters are labeled d1 to d8 (Figure 55). Overlaid in the space between counters d1 and
d3 are representations of the counter mapping onto the input channels c1 to c4 for the T DAQ
(yellow) and W DAQ (red). Each DAQ card requires two counters to fire within a trigger gate
to initiate the readout.
All the setups have the capability of measuring time of flight (TOF), but with a limited angular acceptance; about 10% of the incoming particles also go through the TOF counters (d1 and d3, which are separated by 2.34 m in S1, S2 and S1', and 2.38 m in SB).
Figure 55. Experimental evolution sketch (front view), left to right eras: S2, S1, S1’ and SB.
Each of them utilizes two DAQ readout cards W and T. Trigger requires two hits in each
DAQ card. In S1’ and SB the W DAQ trigger output supplies a T input.
I) S2:
The S2 era consists of four arrays: W, T, S and J. However, since S and J remain the same through the four eras, we focus on W and T.
The trigger of S2 requires an in-time¹ hit in two counters of both DAQs W and T, four counters in total. Originally the counter d2 (Figure 55) was separated vertically from d3 to improve the veto power of d6 by limiting the angular acceptance to pass through d6. If we set the trigger to be d2, d3, d4 and d5, we can use any of the extra counters d6, d7 and d8 as vetoes, since the trigger already satisfies both T and W requirements.
• Trigger for stopping muon: Counters d2, d3, d4 and d5 with a veto in d6 or d7 or d8
(2, 3, 4, 5, 6, 7, 8)
• Electron: A hit following the stopping muon at counter d5. It has no trigger requirement,
since the DAQ W time gate is used to record any hit in counter d5 after a trigger.
• TOF: Counters d1 and d3 in-time.
• S2 allowed us to study decays from W array alone:
W trigger for stopping muon: d2 d5 plus a d6 veto (2, 5, 6)
The acceptance of the W array alone is limited by the separation of d2 and d5, but that allows us to use d6 as an efficient veto, because all passing-through muons (2, 5) are geometrically forced to pass through d6.
¹Two counters in-time means that the times of the leading edges of both signals are within 75 ns, about 3 sigmas from the central value, for all possible pairs.
II) S1:
Once we decided to use the veto with the combination of d6, d7 and d8, and not just d6 by itself, we reorganized the counters into the S1 configuration, with d4 and d5 close to each other. W has a bigger acceptance and does not limit the acceptance imposed by the 6, 7, 8 veto, although to use the d7 veto d2 must still be present; therefore S1 has an advantage over S2. The rates in Table VII show that the W-T in-time track d2 d3 d4 d5 veto d6 d7 d8 has no significant difference between S1 and S2.
• Trigger for stopping muon: (2, 3, 4, 5, 6, 7, 8)
• Electron: A hit following the stopping muon at counter d5. It does not require trigger of
the W readout.
• TOF: Counters d1 and d3 in-time.
The W array by itself has a reduced veto power, since there are geometric possibilities for muons to pass through d4 and d5 but not through d6.¹
III) S1':
S1' combines both W and T arrays into a single W-T array. The trigger of W is used as an input of T, relaxing the T readout requirement from 2 hits to only 1 hit.
S1 used d2 in order to use d7 as a veto; therefore d2 was necessary for the stopping muon trigger, and it was limiting the acceptance. S1' does not have the d2 limitation and it has
¹Poor veto power means that many passing muons are considered as stopping ones.
a better veto efficiency, since any W trigger is part of the T readout. The correlation of W and T¹ also becomes more efficient.
S1' also incorporates the possibility of tracking the electron out of the muon decay (electron tag). The electrons come out of d5 in a straight line, in all possible directions.² The electron tag is defined by the hit in d5 and only one extra hit in a counter around d5 within a time window (except for counter d3, which is taken together with d4); all the remaining counters are used to veto passing-through particles.
• Trigger for stopping muon: (3, 4, 5, 6, 7, 8) counters d3, d4 and d5 with a "U" veto (veto on d6, d7 and d8)
• Electron: A hit following the stopping muon at counter d5. It does not require trigger of
the W readout. We use the counters around d5 to track the electron and we define the
electron tag (e-tag). If the electron appears in counter d5 and no other counter around
we call it isolated (e-iso):
• e-tag: (3, 4, 5, 6, 7, 8) or (5, 6, 3, 4, 7, 8) or (5, 7, 3, 4, 6, 8) or (5, 8, 3, 4, 7, 6)
• e-iso: (5, 3, 4, 6, 7, 8)
• TOF: Counters d1 and d3 in-time.
¹The data correlation of the W-T array is done by software; see Section 3.1.2.
²Some of the electrons do not leave d5; see Table V in Section 4.2.2.1.
IV) SB:
Using the S1' configuration we increased the purity of the decay sample by requiring not only a second hit in d5 but also another hit in d4, d6, d7 or d8 at the same time (counters around d5; see S1' in Figure 55); however, d4 did not completely cover the top of d5. We modified the configuration one last time so that d4, d6, d7 and d8 formed an almost complete box¹ around d5 (see SB in Figure 55). Approximately 60% of muon decays fire one of the counters of this box, defining the electron tag. Since most of the data was collected in the S1' and SB configurations, and the increase in electron purity improved our signal to background discrimination, the rest of this analysis uses only those two eras.
• Trigger for stopping muon: (3, 4, 5, 6, 7, 8)
• Electron: A hit following the stopping muon at counter d5. It does not require trigger of
the W readout. We use the counters around d5 to track the electron and we define the
electron tag:
• e-tag: (3, 4, 5, 6, 7, 8) or (5, 6, 3, 4, 7, 8) or (5, 7, 3, 4, 6, 8) or (5, 8, 3, 4, 7, 6)
• e-iso: (5, 3, 4, 6, 7, 8)
• TOF: Counters d1 and d3 in-time.
Notice that the electron track may trigger W-T if it hits the lower corner d7, d6 and d5 in S1', because that corner has two W counters and one T counter. SB also has a trigger for the electron if it hits
¹We refer to it as the e-box.
either of the lower corners (d7, d6 and d5, or d8, d6 and d5). But only about 4% of the decays will have an electron tag with those characteristics. This subset of events allowed us to develop a technique to search for CHAMPs with lifetimes up to 0.1 s, which is discussed in Chapter 5. We have constructed an array that maximizes the electron tag trigger to allow an improvement in the very long lifetime production limits (or signal) in the future.
                     S2               S1               S1'               SB
Muon Rate [Hz]       2.180±0.003      2.147±0.002      4.179±0.001       2.948±0.001
Track (counters)     d2 d3 d4 d5 d6   d2 d3 d4 d5 d6   d3 d4 d5 d6       d3 d4 d5 d6
Veto (not present)   d7 d8            d7 d8            d7 d8             d7 d8
Decay Rate [Hz]      0.0132±0.0002    0.0128±0.0002    0.04293±0.00007   0.04283±0.00005
Track (counters)     d2 d3 d4 d5      d2 d3 d4 d5      d3 d4 d5          d3 d4 d5
Veto (not present)   d6 d7 d8         d6 d7 d8         d6 d7 d8          d6 d7 d8
Run Duration         65 days,         30 days,         180 days,         215 days,
                     75k decays       30k decays       500k decays       700k decays
TABLE VII
EXPERIMENT EVOLUTION RATE COMPARISON: THE TRACKS FOR THE PASSING AND
STOPPING MUONS WERE TAKEN WITHIN A 100 NS TIME GATE. THE TRACKS SHOWN
IN THE DECAY ROWS CORRESPOND TO THE STOPPING MUON. TO CALCULATE THE
DECAY RATE, ANOTHER HIT WAS REQUIRED IN D5 (ELECTRON TIME) AND THE TIME
DIFFERENCE BETWEEN THE STOPPING MUON AND THE ELECTRON WAS PLOTTED AND
FITTED (AFTER BACKGROUND SUBTRACTION), GIVING THE TOTAL NUMBER OF MUON
DECAYS; THE RATE IS THEN THE RATIO OF THAT NUMBER TO THE RUNNING TIME.
Final setup: S1' and SB
The experiment consists of three detector arrays: W-T, S and J, where W-T is a stack of seven or eight counters, and S and J are each made of four horizontally distributed counters, as shown in Figure 10. The counters are made of plastic scintillator material connected to 1", 2" and 3" PMTs. Data was collected on independent readout systems consisting of 4-channel Fermilab QuarkNet DAQ cards.
Appendix B
FITTING DEVELOPMENTS
This appendix describes two novel fitting developments. The first is an improved treatment of the uncertainties calculated with least squares curve fitting. The second is an improvement of the fitted parameters themselves for smooth curves like exponential decays.
The motivation for this development started when we were trying to find the best way to fit an exponential decay, i.e. the optimal number of bins while keeping enough statistics per bin. We discovered that standard least squares (LS) fitting requires an arbitrary criterion to decide the optimal bin size and therefore to produce meaningful results. We faced the problem that the fitting results depended on the number of bins, giving "bad" fit results when the number of counts per bin is "small", but also when we use too few bins.
Here is an example that shows the same behavior we faced with real data. We use a simulation because we want to know exactly what to expect from the fit. Figure 56 shows a simulation of 20,000 particles with a lifetime of 2,000 [a.u.]¹; we selected the region from 0 a.u. to 10,000 a.u. to fit the lifetimes, and we modify the bin size to change the number of fitting points.
¹The data is generated randomly, so we prefer the nomenclature Arbitrary Units (a.u.) instead of a temporal unit.
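The simulation behind this example can be reproduced with a short script (a sketch under our own assumptions: exponentially distributed decay times, and a weighted linear fit of the log-counts standing in for the LS fit, with Poisson weights; all names are ours):

```python
import math
import random

def simulate_and_fit(n_events=20000, lifetime=2000.0, t_max=10000.0,
                     n_bins=50, seed=1):
    """Generate exponential decay times, histogram them in [0, t_max],
    and estimate the lifetime from a weighted linear fit of log(counts)
    versus bin center.  Weights are the counts themselves, since
    var(log N) ~ 1/N for Poisson bins."""
    rng = random.Random(seed)
    times = [rng.expovariate(1.0 / lifetime) for _ in range(n_events)]
    width = t_max / n_bins
    counts = [0] * n_bins
    for t in times:
        if t < t_max:
            counts[int(t // width)] += 1
    # keep only non-empty bins; empty bins have no defined log
    pts = [((i + 0.5) * width, c) for i, c in enumerate(counts) if c > 0]
    W = sum(c for _, c in pts)
    xb = sum(c * x for x, c in pts) / W
    yb = sum(c * math.log(c) for x, c in pts) / W
    slope = (sum(c * (x - xb) * (math.log(c) - yb) for x, c in pts)
             / sum(c * (x - xb) ** 2 for x, c in pts))
    return -1.0 / slope  # log(counts) = a + slope*t, lifetime = -1/slope
```

Changing `n_bins` here reproduces the qualitative behavior discussed above: with very fine binning the low-count tail bins pull the fitted lifetime away from the input value.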
Figure 56. Decay simulation of 20,000 events with a lifetime of 2,000 a.u.. We choose the range
from 0 a.u. to 10,000 a.u. and change the bin size to increase the number of fitting points.
We fit with the weighted least squares (LS) method, widely used in the field for its reliability and statistical meaning¹ (44, Chapter 7)(36, Chapter 4). Figure 57 shows the lifetime fits plotted versus the number of bins. There is a tendency for the fitted lifetime to go down when the statistics per bin are small, and the error bars do not include the expected value for more than 100 bins (nb > 100). For nb = 100 the biggest bin has around 1000 counts and the smallest around 10 counts, with an average of 200 counts, and we can still see the exponential shape (Figure 56).
¹The LS fitting parameters can be derived from the maximum likelihood method.
Figure 57. Lifetime versus number of bins (nb).
Figure 58 shows just the uncertainties of the lifetime fits (the error bars in Figure 57). The plot shows that the fluctuations of the uncertainties are worse when a smaller number of bins is used, as we intuitively expect. It can be shown that they follow approximately:

ε_σ = 1/√(2·nb − 1)

Therefore, we have to choose a middle point where we get a reliable lifetime and a reliable error; nb = 50 has both characteristics in this case: the first bin has about 2000 counts and the last about 20 counts (Figure 56). But there is no real statistical argument to choose a particular binning.
The methods we propose in the next two sections solve both problems: they produce a reliable uncertainty and a reliable lifetime independent of the number of bins (Figure 59). Notice that the lifetime shown is consistently larger than expected; this is not a bias, since the same data is used for all fits and only the bin size is changed. A single experiment yields a lifetime which does not have to be exactly the expected value, only within statistical fluctuations; the value of the lifetime is then fixed after the measurement and should be independent of the number of bins chosen for the fit.
Figure 58. Lifetime uncertainty (σ) versus number of bins (nb): the lifetime error from the LS fit [a.u.], its average <σ>, and the curve <σ>·(2·nb − 1)^−0.5.
Figure 59. Lifetime uncertainty from the Modified Stabilized LS fit versus number of bins, generated with 100 pseudo-experiments (PE): the error [a.u.], its average <σ>, and the curve <σ>·(2·nPE − 1)^−0.5.
B.1 Improvement of uncertainties of fitted parameters with pseudo experiments
The uncertainties of the parameters in maximum likelihood fits (e.g. least squares) are calculated from the dispersion of the data relative to the fitted values, without taking into account prior knowledge of the uncertainties of the measurements (45). If the uncertainties are not all the same, the points can be weighted, but the weights only rank the points relative to each other. In the case of a particle lifetime measurement histogram, the counts of each bin follow a Poisson distribution with a standard deviation equal to the square root of the counts (46); therefore each bin is a measurement with a well known uncertainty, and this proposed modification of the least squares method takes advantage of that extra information.
In this section we are interested in improving the uncertainties of the fitted parameters. For instance, in the case of the measurement of a magnitude with a Gaussian distribution, the uncertainty of the fitted value also fluctuates: its relative uncertainty goes as 1/√(2·(nb − 1)) (the standard deviation of σ), where nb is the number of bins or points to be fitted (36, Chapter 4, p. 90). This is a disadvantage when looking for well defined errors in the fitted parameters. For example, the muon lifetime measured with 5 bins will have an error with an uncertainty of about O(35%); and this fluctuation of the error is independent of the size of the un-binned sample.
This is a big issue when the fitting is done on a small sample. Our lifetime measurements are binned and the number of bins is limited by the total sample statistics; in that case the bin sizes cannot be reduced and the number of fitting points has to be reduced.
To solve this problem, and therefore get an accurate measurement of the uncertainties of the fitted parameters, we conducted a set of O(100) pseudo-measurements using a random number generator¹ for each data point, using the value of the data as the mean value of the generated random distribution. We thus produced a hundred equivalent experiments that are fitted one at a time; in this way we obtain a set of a hundred fitted parameters that form a distribution. Since the fitting parameters are random variables themselves, we are sampling their distribution, and therefore we can take its mean value and its standard deviation as the uncertainty. The number of pseudo-experiments gives us some control over how precisely we know the uncertainties; to show this, the same experiment was fitted with an increasing number of pseudo-experiments in the fit (Figure 60).²
¹The random numbers have to be generated following the statistical distribution known for the data. In the case of counting, that is a Poisson distribution.
²Each set of pseudo-experiments was generated independently to avoid correlations between data points.
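A minimal sketch of the pseudo-experiment procedure (names are ours; it assumes Poisson-distributed bin contents and takes the fitting routine as a user-supplied function):

```python
import random
import statistics

def pe_uncertainty(counts, fit, n_pe=100, seed=0):
    """Estimate the uncertainty of a fitted parameter by refitting
    Poisson-fluctuated copies of the data.

    counts : observed bin contents, used as the Poisson means of the PEs
    fit    : function mapping a list of counts to the fitted parameter
    Returns the mean and standard deviation of the fitted parameter
    over n_pe pseudo-experiments."""
    rng = random.Random(seed)

    def poisson(mu):
        # Knuth's method; adequate for the modest means used here
        L, k, p = pow(2.718281828459045, -mu), 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= L:
                return k - 1

    results = [fit([poisson(c) for c in counts]) for _ in range(n_pe)]
    return statistics.mean(results), statistics.pstdev(results)
```

With a real lifetime fit passed as `fit`, the returned standard deviation plays the role of the improved parameter uncertainty described above.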
Figure 60. Modified fit uncertainty vs. number of pseudo-experiments (PEs) used in the fit: the plot shows the uncertainty (standard deviation) of the lifetime of an experiment fitted with different numbers of pseudo-experiments. Notice that the uncertainty of the uncertainty follows the expected σ/√(2(n − 1)).
This method may also allow us to include systematic errors in the fit. These errors can be Gaussian, uniform, or follow any other distribution. This is another improvement, because there is no way to introduce systematic errors in the least squares method without redoing the whole algorithm for each particular case.
The method showed great results in simulations, giving the expected results for the fitted parameters and the errors. As an example, we show the application of this method to an exponential decay of 2000 particles with a lifetime of 2000 [arbitrary units]. Five bins are taken, as shown in Figure 61.
Figure 61. Simulation of decay of 2000 particles with a lifetime of 2000 a.u.
To do the comparison we generate 300 sets of this same experiment (randomly generated) with exactly the same lifetime and number of particles. The lifetime fitted with the regular least squares method has a distribution centered at 2000 a.u. with a standard deviation of about 100 a.u. (blue histogram in Figure 62). Figure 61 shows the lifetime plot of one of the experiments of Figure 62, and its lifetime and standard deviation are shown in red. In this case, that value differs significantly from the actual lifetime, because the lifetime uncertainty is about 50 a.u., almost half of what it is supposed to be. Finally, fitting with the modified least squares, the resulting lifetime is not changed, but the uncertainty is about 100 a.u., which is the expected value (green).
Figure 62. A set of simulated lifetime experiments: 300 simulated lifetime experiments, each with 2000 events and a lifetime of 2000 a.u. The blue histogram shows the lifetime fitted with the regular least squares for the 300 experiments, with the mean value (black) and the standard deviation (black arrows). In red, the lifetime measured for one of these experiments. In green, the same experiment fitted with the modified method using 300 Poisson-generated pseudo-experiments to improve the standard deviation; notice that the mean value is similar to the one obtained with the regular least squares method, but the standard deviation is about the expected value (black).
To show that this is not by chance, we run both fits, the regular least squares method (red) and the proposed modification (green), on each experiment of Figure 62. In Figure 63 we show the standard deviation of the regular least squares fit (red) and of the proposed modified method (green). It is clear that almost all experiments show a standard deviation of about 100, independently of the fitted lifetime.
Figure 63. Standard deviation comparison: each experiment is fitted with the traditional least squares method (red) and with the modified method (green). The modified fit is done with 300 pseudo-experiments (PE).
B.2 Fitting weight stabilization (Lifetime fitting bias fix)
This method is useful for fitting Poisson distributed data, and it has a big impact for low-statistics data points, since they can modify the global fit.
Fitting an exponential is challenging, since an exponential histogram has very different errors for different data points. These points are weighted assuming they are Poisson distributed; that means that their weights are taken as the inverse of their variances (variance = σ² = N and w = 1/N). Poisson distributed data has, to a good approximation, the median equal to the mean, which means there is about a 50% chance to measure a number above the mean and a 50% chance to measure one below it; but the smaller numbers have more weight in the fit, since small fluctuations mean big weights (σ = √N). Therefore, if a data point is above the mean, it is going to be weighted less than if it falls below it, even though they should be weighted the same on average. Consequently, the measured lifetime is biased toward smaller values (Figure 65), and this effect is bigger for data points with low statistics; see Figure 64.
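The weight asymmetry can be checked numerically (illustration only; a normal approximation stands in for the Poisson draws):

```python
import random

def mean_weight_asymmetry(mu=25.0, n=10000, seed=7):
    """Compare the average least-squares weight (w = 1/N) of samples
    below versus above their mean mu.  Returns True when the below-mean
    samples carry the larger average weight."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n:
        # crude normal approximation to a Poisson of mean mu (fine for mu = 25)
        k = round(rng.gauss(mu, mu ** 0.5))
        if k > 0:
            samples.append(k)
    below = [1.0 / k for k in samples if k < mu]
    above = [1.0 / k for k in samples if k > mu]
    return sum(below) / len(below) > sum(above) / len(above)

print(mean_weight_asymmetry())  # → True: the low-count side weighs more
```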
Figure 64. Weights for randomly distributed Poisson data: data one standard deviation (1σ) below the mean weighs more than data above the mean.
Figure 65. Fit bias in a lifetime measurement: the graph shows a simulated lifetime experiment with a lifetime of 2000 a.u. and 2000 events. About half of the data points fall above the theoretical mean (green) and half below it (red). The errors of the points below the mean are smaller, and therefore those points weigh more than the ones above.
The method we introduce to solve this bias is to recalculate the weights from the fit and not from the data. This is an iterative method that takes the weights from the data as a first step; after the first fit, the fitted curve is used to recalculate the weights. This procedure is repeated until the fitted parameters converge with a stipulated precision, which we want much smaller than the statistical fluctuations (in our case we take it as 0.01%).
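The iteration can be sketched as follows (our own names; it fits in log space and, unlike the procedure in the text, seeds the weights from an initial parameter guess rather than from the data; counts must be positive):

```python
import math

def stabilized_fit(centers, counts, tau0, n0, tol=1e-4):
    """Iteratively refit an exponential n0*exp(-t/tau) to binned counts.
    After each weighted log-linear fit, the weights are recomputed from
    the fitted curve (w = model prediction, since var(log N) ~ 1/N)
    instead of from the data, and the loop repeats until tau changes by
    less than tol (relative)."""
    tau, n0_fit = tau0, n0
    for _ in range(100):
        # weights from the current fitted curve, not from the counts
        w = [n0_fit * math.exp(-t / tau) for t in centers]
        y = [math.log(c) for c in counts]
        W = sum(w)
        xb = sum(wi * t for wi, t in zip(w, centers)) / W
        yb = sum(wi * yi for wi, yi in zip(w, y)) / W
        slope = (sum(wi * (t - xb) * (yi - yb)
                     for wi, t, yi in zip(w, centers, y))
                 / sum(wi * (t - xb) ** 2 for wi, t in zip(w, centers)))
        new_tau = -1.0 / slope
        n0_fit = math.exp(yb - slope * xb)
        if abs(new_tau - tau) < tol * tau:
            return new_tau, n0_fit
        tau = new_tau
    return tau, n0_fit
```

Because the weights come from the model, an upward or downward fluctuation of a bin no longer changes its own weight, which is the mechanism behind the bias removal.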
The least squares method yields a lifetime of 1992 ± 53 a.u. for the example shown in Figure 65, which is consistent with the expected value of 2000. The modified and stabilized least squares gives a lifetime of 2006 ± 57 a.u., which is also consistent with the expected value. The small difference between the two is not significant; but if we have a number of experiments, all under the same conditions, the original least squares method shows a bias. To prove that, we run 100, 300, 500 and 1000 pseudo-experiments and calculate the least squares fit for each individually to see what distribution they form. We do the same with the modified stabilized least squares method, and we see that its distribution systematically includes the expected value, while the traditional least squares fit shows differences of several standard deviations from the expected value (Figures 66 and 67).
Figure 66. Fit bias method comparison for 100 pseudo-experiments: there are three sets of 100 pseudo-experiments (PEs) each. The graph shows that the least squares (LS) fit differs by more than two standard deviations from the expected value. The stabilized modified method has no significant difference from the expected lifetime and shows no preference for being smaller or bigger. Notice that the modified method without the stabilization shows the same bias as the LS method, and that the error in the mean value for 100 experiments goes as σ/√100, where σ is the error for a single experiment (the average of all the standard deviations). The blue histogram is made using the LS result for each PE.
Figure 67. Fit bias method comparison with increasing number of pseudo-experiments: there are three sets of pseudo-experiments (PEs): 300, 500 and 1000. The graph shows that the tendency of the least squares (LS) fit seen with 100 PEs remains, reaching up to a 9σ difference with 1000 PEs, while the stabilized modified method has no significant difference from the expected lifetime. Notice that the modified method without the stabilization shows the same bias as the LS method, and that the error in the mean value for 100 experiments goes as σ/√100, where σ is the error for a single experiment (the average of all the standard deviations). The blue histogram is made using the LS result for each PE; notice that it is centered at the LS mean value.
Notice that all the data points in Figure 65 have more than 20 counts except for the last one, and still it introduces a bias. This problem gets worse with smaller statistics. For instance, with half the events (1000 events) the bias is more pronounced; see Figure 68.
Figure 68. Fit bias with reduced statistics.
B.2.1 Self consistency test
To test the self consistency of each method, we run 100 pseudo-experiments of 1000 events each and compare the result with the full-sample experiment of 100000 events. The expected lifetime is 2000. Table VIII shows that the regular least squares method has significant differences with respect to the full-experiment fit, while the stabilized modified method shows agreement.
                               Least Squares Fit
                               Traditional         Modified            Modified Stabilized   Expected
100 PEs of 1000 events each:
<τ> ± <σ>/√100                 1969.0 ± 6.7 a.u.   1983.9 ± 6.8 a.u.   1997.3 ± 6.6 a.u.     2000 a.u.
100000 event sample: τ ± σ     1995.6 ± 7.0 a.u.   1994.9 ± 8.0 a.u.   1995.9 ± 7.6 a.u.     2000 a.u.
TABLE VIII
FIT SELF CONSISTENCY TEST: THE LIFETIME (τ) MEASURED WITH THE
TRADITIONAL LEAST SQUARES FIT ON 100 EXPERIMENTS OF 1000 EVENTS
EACH, SHOWS SIGNIFICANT DIFFERENCES WITH THE MEASUREMENT FROM
THE COMBINED 100000 EVENT EXPERIMENT. THE MODIFIED STABILIZED FIT IS
CONSISTENT IN BOTH MEASUREMENTS AND WITH THE EXPECTED VALUE.
Appendix C
NUMBER OF DECAYS FROM A DISCRETE EXPONENTIAL FIT
C.1 From continuous to discrete fitting
In this thesis we fit histograms a number of times; histograms are binned data, and they follow a discrete function.
For a single exponential plus a constant background, the discrete fitting function we propose is:

N_d = N°_d · exp(−T/τ) + C_d    (C.1)

This function comes from integrating the ideal continuous exponential N°·exp(−t/τ) + C over one bin:

∫ from T−b/2 to T+b/2 of (N°·exp(−t/τ) + C) dt = N°·τ·2 sinh(b/(2τ)) · exp(−T/τ) + C·b    (C.2)

where b is the bin size and T is the center of the bin. This proves that the discrete function of Equation C.1 is an exponential, and the relation with the continuous function is:

N°_d = N°·τ·2 sinh(b/(2τ))    (C.3)

C_d = C·b    (C.4)
The total number of decays is the integral of the continuous exponential with the constant background subtracted:

∫ from 0 to ∞ of N°·exp(−t/τ) dt = N°·τ ≡ N_tot    (C.5)

Since the parameters we fit are N°_d and C_d, the total number of decays can be rewritten as:

N_tot = N°_d / (2 sinh(b/(2τ)))    (C.6)

Notice that Equation C.6 becomes N_tot ≃ N°_d·τ/b in the limit b ≪ τ. This approximation does not apply well in our case, where the bins are about 1000 ns and τ is about 2000 ns.
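Equation C.6 and its small-bin limit can be checked numerically (variable names are ours):

```python
import math

def n_total(n0_d, bin_size, tau):
    """Total number of decays from the discrete amplitude (Eq. C.6)."""
    return n0_d / (2.0 * math.sinh(bin_size / (2.0 * tau)))

# With b = 1000 ns and tau = 2000 ns (our typical values) the b << tau
# approximation N_tot ~ n0_d * tau / b is already off by about 1%:
exact = n_total(1.0, 1000.0, 2000.0)
approx = 1.0 * 2000.0 / 1000.0
```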
C.1.1 Uncertainty propagation
Since we have an analytic expression for N_tot, we can obtain its uncertainty by propagating the uncertainties of the fitted parameters σ_N°d, σ_τ and σ_b. Assuming they are not correlated, we obtain:

ε_Ntot = √( ε²_N°d + (ε_τ · b/(2τ·tanh(b/(2τ))))² + (ε_b · b/(2τ·tanh(b/(2τ))))² )    (C.7)

where the relative error of a parameter X is ε_X ≡ σ_X/X.
The bin size is essentially a time difference, so its uncertainty is σ_b = √2 · 1.25 ns, where 1.25 ns is the least count of the DAQ.
C.2 Off-centered bins
Does the lifetime fit change if we take the time (discrete independent variable) at the center
of the bins or choose another time?
This question is motivated by the fact that the bins are not uniformly populated in an
exponential function: most of the events are closer to one of the ends of the bins.
The answer to this question is "no, it does not". If we repeat the procedure followed in Section C.1 but with an off-centered time (T), we still get an exponential with lifetime τ. But N°_d changes, and the integration has to be offset as well to obtain the right total number of decays.
Let f be a real number between 0 and 1 (f ∈ (0, 1)) that indicates the position of T measured from the lower end of the bin; for example, f = 0.8 means that T sits 80% of the way across the bin. In Section C.1 we used f = 1/2, since the time T is at the center of the bin.
N_d = ∫ from T−f·b to T+(1−f)·b of (N°·exp(−t/τ) + C) dt = [N°·τ · exp((2f − 1)·b/(2τ)) · 2 sinh(b/(2τ))] · exp(−T/τ) + C·b    (C.8)

Then N_d can be written as:

N_d = N°_d · exp(−T/τ) + C_d    (C.9)

where

N°_d = N°·τ · exp((2f − 1)·b/(2τ)) · 2 sinh(b/(2τ))    (C.10)

and

C_d = C·b    (C.11)

Notice that with f = 1/2 we recover the result obtained in Equation C.3.
CITED LITERATURE
1. Cline, D.: Proceedings of the 1st International Symposium on Sources of Dark Matter inthe Universe: 16-18 February 1994, Bel Air, California. World Scientific, 1995.
2. Fairbairn, M., Kraan, A., Milstead, D., Sjostrand, T., Skands, P., and Sloan, T.: Stablemassive particles at colliders. Physics Reports, 438(1):1–63, 2007.
3. Kamimura, M., Kino, Y., and Hiyama, E.: Big-Bang Nucleosynthesis Reactions Catalyzedby a Long-Lived Negatively Charged Leptonic Particle. Progress of TheoreticalPhysics, 121(5):1059–1098, 2009.
4. Jittoh, T., Kohri, K., Koike, M., Sato, J., Shimomura, T., and Yamanaka, M.: Big-bang nucleosynthesis and the relic abundance of dark matter in a stau-neutralinocoannihilation scenario. Phys. Rev. D, 78:055007, Sep 2008.
5. Takayama, F.: Extremely Long-Lived Charged Massive Particles as A Probe for Reheatingof the Universe. Phys.Rev., D77:116003, 2008.
6. Halzen, F. and Mart, A. D.: Quarks and Leptons. Wiley, 1984.
7. Griffiths, D.: Introduction to Elementary Particles. Wiley-VCH, 2nd edition, 2008.
8. Abreu, P. et al.: Update on the correlation of the highest energy cosmic rays with nearbyextragalactic matter. Astropart.Phys., 34:314–326, 2010.
9. Groom, D. E., Mokhov, N. V., and Striganov, S. I.: Muon Stopping Power and RangeTables 10 MeV-100 TeV. Atomic Data and Nuclear Data Tables, 78:183–356, July2001.
10. Nakamura, K. et al.: The Review of Particle Physics. J. Phys. G 37, 075021, 2010 and2011 partial update for the 2012 edition. Particle Data Group.
11. Sanuki, T.: Cosmic ray data and their interpretation: about BESS experiment. NuclearPhysics B - Proceedings Supplements, 175-176:149–154, 2008. Proceedings of theXIV International Symposium on Very High Energy Cosmic Ray Interactions.
153
12. Suzuki, T., Measday, D. F., and Roalsvig, J. P.: Total nuclear capture rates for negativemuons. Phys. Rev. C, 35(6):2212–2224, Jun 1987.
13. Chitwood, D. et al.: Improved measurement of the positive muon lifetime and determination of the Fermi constant. Phys.Rev.Lett., 99:032001, 2007.
14. Tsuji, S., Iyono, A., Liang, S., Matsumoto, H., Morita, T., Nakatsuka, T., Noda, C., Ochi, N., Okei, K., Okita, M., Takahashi, N., Wada, T., Yamamoto, I., and Yamashita, Y.: The atmospheric muon flux and charge ratio using a magnet spectrometer. Nuclear Physics B - Proceedings Supplements, 175-176:358–361, 2008. Proceedings of the XIV International Symposium on Very High Energy Cosmic Ray Interactions.
15. Bjornboe, J., Damgard, G., and Hansen, K.: Search for Long-lived Heavy Particles in Cosmic-Ray Events at Energies Above Several Thousand GeV. Nuovo Cim., B53:241, 1968.
16. Mincer, A., Freudenreich, H., Goodman, J., Tonwar, S., Yodh, G., et al.: Search for heavy long-lived particles in High-Energy Cosmic Rays. Phys.Rev., D32:541–546, 1985.
17. Sakuyama, H. and Watanabe, K.: Heavy particle with long life in cosmic rays above 10^17 eV. Lett.Nuovo Cim., 36:389, 1983.
18. Badier, J. et al.: Mass and lifetime limits on new long-lived particles in 300-GeV/c pi- interactions. Z.Phys., C31:21, 1986.
19. Armitage, J., Benz, P., Bobbink, G., Erne, F., Kooijman, P., et al.: Search for new long-lived particles with masses in the range 1.4-GeV to 3.0-GeV at the CERN ISR. Nucl.Phys., B150:87, 1979.
21. Wang, S. M.: Search for supersymmetry at the Tevatron. 2007.
22. Abazov, V. et al.: Search for neutral, long-lived particles decaying into two muons in pp̄ collisions at √s = 1.96 TeV. Phys.Rev.Lett., 97:161802, 2006.
23. Abazov, V. M. et al.: Search for Stopped Gluinos from pp̄ Collisions at √s = 1.96 TeV. Phys. Rev. Lett., 99:131801, Sep 2007.
24. Drees, M. and Tata, X.: Signals for heavy exotics at hadron colliders and supercolliders. Physics Letters B, 252(4):695–702, 1990.
25. Aaltonen, T. et al.: Search for Long-Lived Massive Charged Particles in 1.96 TeV pp̄ Collisions. Phys.Rev.Lett., 103:021802, 2009.
26. Abdallah, J. et al.: Search for supersymmetric particles in light gravitino scenarios and sleptons NLSP. The European Physical Journal C - Particles and Fields, 27:153–172, 2003. 10.1140/epjc/s2002-01112-4.
27. Gataullin, M., Rosier, S., Xia, L., and Yang, H.: Searches for gauge-mediated SUSY-breaking topologies with the L3 detector at LEP. AIP Conf.Proc., 903:217–220, 2007.
28. Asai, S., Hamaguchi, K., and Shirai, S.: Measuring lifetimes of long-lived charged massive particles stopped in LHC detectors. Phys.Rev.Lett., 103:141803, 2009.
29. Ratnikov, F.: Search for Stopped Gluinos in pp collisions at √s = 7 TeV at CMS. Conf.Proc., C100901:297–300, 2010.
30. Conceicao, R.: Results from the Pierre Auger Observatory. 2011.
31. Aglietta, M. et al.: Response of the Pierre Auger Observatory water Cherenkov detectors to muons. 2005.
32. Allison, P. et al.: Observing muon decays in water Cherenkov detectors at the Pierre Auger Observatory. 2005.
33. Abraham, J. A. et al.: Operations of and Future Plans for the Pierre Auger Observatory. 2009.
34. Nigra, L. M.: Cosmic ray-induced muon shower measurements using a low-cost, small-baseline array. Master’s thesis, University of Illinois at Chicago, 2005.
35. Jackson, J. D.: Classical Electrodynamics. John Wiley & Sons, 3rd edition, 1998.
36. Leo, W. R.: Techniques for Nuclear and Particle Physics Experiments. Springer-Verlag, 1987, 1994.
37. Knoll, G. F.: Radiation detection and measurement. John Wiley & Sons, 2010.
38. Salacka, J. S. and Bacrania, M. K.: A Comprehensive Technique for Determining the Intrinsic Light Yield of Scintillators. IEEE Transactions on Nuclear Science, 57(2):901–909, April 2010.
39. Jaynes, E.: Prior Probabilities. IEEE Transactions on Systems Science and Cybernetics, 4(3):227–241, Sept. 1968.
40. Fisher, W.: Collie: A Confidence Level Limit Evaluator. Technical report, Fermilab D0 Note 5595, 2009.
41. Linnemann, J. et al.: Calculating Confidence Limits. Technical report, Fermilab D0 Note 4491, 2004.
42. Bertram, I. et al.: A Recipe for the construction of confidence limits. 2000.
43. Parzen, E.: Modern Probability Theory and Its Applications. John Wiley & Sons, 1960.
44. James, F.: Statistical Methods in Experimental Physics. World Scientific, second edition,2006.
45. Whittaker, E. T. and Robinson, G.: The calculus of observations: an introduction to numerical analysis. New York: Dover Publications, 4th edition, 1967. "An unabridged and unaltered republication of the fourth edition (1944) of the work originally published ... in 1924."
46. Cannizzaro, F., Greco, G., Rizzo, S., and Sinagra, E.: Results of the measurements carried out in order to verify the validity of the Poisson-exponential distribution in radioactive decay events. The International Journal of Applied Radiation and Isotopes, 29(11):649–IN1, 1978.
47. Adams, T. et al.: Observation of an anomalous number of dimuon events in a high-energy neutrino beam. Phys.Rev.Lett., 87:041801, 2001.
48. Aguilar-Saavedra, J.: Computation of confidence intervals for Poisson processes. Comput.Phys.Commun., 130:190–203, 2000.
49. Andersen, K. K. and Klein, S. R.: High energy cosmic-ray interactions with particles from the Sun. Phys.Rev., D83:103519, 2011.
50. Blaskiewicz, M., Brennan, J., and Mernick, K.: Three-Dimensional Stochastic Cooling in the Relativistic Heavy Ion Collider. Phys.Rev.Lett., 105:094801, 2010.
51. Byrne, M., Kolda, C., and Regan, P.: Bounds on charged, stable superpartners from cosmicray production. Phys. Rev. D, 66:075007, Oct 2002.
52. Chen, J. and Adams, T.: Heavy stable charged particle searches at the LHC. International Journal of Modern Physics A, 26(20):3315–3335, 2011.
53. Cousins, R. D. and Highland, V. L.: Incorporating systematic uncertainties into an upper limit. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 320(1-2):331–335, 1992.
54. Feldman, G. J. and Cousins, R. D.: Unified approach to the classical statistical analysis of small signals. Phys. Rev. D, 57:3873–3889, Apr 1998.
55. Greisen, K.: Cosmic Ray Showers. Ann. Rev. Nucl. Sci., 10:63, 1960.
56. Nakamura, K. et al.: The Review of Particle Physics. J. Phys. G 37, 075021, Section 24 (Cosmic Rays), 2010 and 2011 partial update for the 2012 edition. (Particle Data Group).
57. Yin, P.-f. and Zhu, S.-h.: Detecting light long-lived particle produced by cosmic ray. Physics Letters B, 685(2-3):128–133, 2010.