Simulation of Silicon Detector Response
for 69Kr β-Delayed Proton Emission
by
Zachary Paul Meisel
THESIS
Submitted in partial fulfillment of the requirements
for the degree of Bachelor of Science in Astrophysics
in the College of Natural Science of
Michigan State University, 2010
East Lansing, Michigan
Simulation of Silicon Detector Response
for 69Kr β-Delayed Proton Emission
Zachary Paul Meisel
Department of Physics and Astronomy
Michigan State University, 2010
Hendrik Schatz, Director of Thesis Research
Modern nuclear astrophysics experiments rely on simulations to ensure accurate and
efficient data interpretation. Here Monte Carlo simulations have been performed using
the simulation packages MCNPX, GEANT4, and CASINO to aid in the identification of
decay branches in the β-delayed proton emission of 69Kr. Information regarding these
branches will be used to determine a proton-capture Q-value for 68Se in order to determine
its impact as a waiting point in the astrophysical rapid proton-capture (rp-)process. It is
well known that waiting points in the rp-process, the dominant mechanism of nucleosynthesis
in type-I x-ray bursts, dominate many of the features of the bursts' light curves. Thus
information on 68Se will help advance our understanding of this cosmic phenomenon.
Acknowledgments
This work was completed with support from the Department of Physics and Astronomy at
Michigan State University and from National Science Foundation grants PHY02-16783
(JINA) and PHY01-10253 (NSCL). Many thanks to my research advisor, Hendrik
Schatz. He has provided me with many unique opportunities to present and discuss my
research and has also connected me with a superior group of research scientists. I am also
greatly indebted to Richard Cyburt and Karl Smith, each of whom has provided much-needed
mentoring over the last two years. Additional thanks are certainly due to Marcelo
del Santo, Heather Crawford, and Giuseppe Lorusso for helping me with difficulties I
encountered in creating my simulations, and to Ana Becerril, Fernando Montes, and
Sebastian George for useful discussions.
Foreword
This paper is likely to have readers with various backgrounds in nuclear astrophysics;
thus, some of the following chapters may be of more interest to a given reader than others.
It is the author's belief that these readers will be of three types: the non-physicist looking
for an example of an undergraduate's contribution to the field of nuclear astrophysics,
the undergraduate or early graduate student in physics whose task it may be to simulate
silicon detector response, and the specialist who requires information on the simulations
performed for the NSCL's experiment 07025. Chapters 1 and 2 are aimed at the first
class of reader, though the remaining chapters should still be comprehensible. The second
class of reader may use the first two chapters as a refresher and pay more attention to
the remaining text, where the most valuable sections will likely be Chapter 3 and the
appendix. The third class of reader will most likely be interested only in Chapters 4 and
5 as well as the appendix. However, these are only suggestions, and each section should
be comprehensible and perhaps enlightening to the majority of readers.
Chapter 1
Introduction to the Study of
Nuclear Reactions
Nuclear physics strives to understand the nuclei of the elements that make up our uni-
verse. The study of nuclear reactions strives to understand how these nuclei interact.
A convenient context in which we speak of nuclei is the chart of nuclides, also known
as a Segrè chart, which is shown in Figure 1.1[1]. The x-axis denotes neutron number
N, the y-axis denotes proton number Z, and the combination of these two gives us the
atomic mass A = Z + N. Using these numbers, we refer to a nucleus with a given symbol
S with the notation A_Z S_N, where the mass number A appears as a leading superscript,
the proton number Z as a leading subscript, and the neutron number N as a trailing
subscript. For example, the ultimate nucleus of interest in this study, Selenium-68
(A = 68, Z = 34, N = 34), is denoted by 68_34 Se_34.
To describe nuclear reactions, or interactions between nuclei and leptons, we incorpo-
rate our notation for individual nuclei. In a nuclear reaction, the participating particles
that interact are the reactants and the resultant particles after the reaction are the
products. A convenient way to write the reaction is
reactants −→ products.
For this study we will deal with one or two reactants and two or three products, so instead
the reaction looks something like
A + B −→ C + D.
Then, to save space, we condense this notation by placing the heavier reactant and the
heavier product, i.e. the nuclei, on the outside of a set of parentheses and placing the
lighter reactant and product on the inside of these parentheses, separated by a comma.
So, our fictitious equation above now looks like A(B, C)D. Oftentimes we will refer to
the outside reactant as the target and the inside reactant as the beam, but in the context
Figure 1.1: Chart of nuclides.
of a stellar environment it does not really matter which reactant gets which label. In
the physical process that occurs in an astrophysical environment, all that matters is that
the reactants interact to produce the products.
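Since the condensed notation is purely mechanical, it can be illustrated with a short helper function. This is a sketch in Python; the function name and the assumed input ordering (heavier reactant first on the left, lighter product first on the right) are the author's own conventions, not part of any standard library:

```python
def condense(reaction):
    """Condense a reaction 'A + B -> C + D' into the compact form A(B, C)D.

    Assumes the heavier reactant (the target) is written first on the
    left-hand side and the lighter product is written first on the right.
    The nuclei end up outside the parentheses, the light particles inside.
    """
    lhs, rhs = reaction.split("->")
    target, beam = [p.strip() for p in lhs.split("+")]
    light_out, heavy_out = [p.strip() for p in rhs.split("+")]
    return f"{target}({beam}, {light_out}){heavy_out}"

# For example, a proton capture that emits a gamma ray:
print(condense("34Cl + p -> g + 35Ar"))  # 34Cl(p, g)35Ar
```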
The main factors which control whether or not a nuclear reaction will occur are
the temperature of the environment and the density of reactants in the environment.
A multitude of nuclear reactions are often possible under similar conditions, meaning that
they can occur simultaneously. Thus, we could imagine a situation where our fictitious
reactants A and B interact alongside product C interacting with some other reactant
E.
A + B −→ C + D
C + E −→ F + G
It doesn’t take much of an extension of this idea to realize that whole networks can
form, depleting nuclei of lower mass to build nuclei of higher mass, absorbing and re-
leasing energy along the way. Many such networks exist in astrophysical environments.
The reaction network of interest in this study is the rapid proton-capture (rp-)process.
In the rp-process successively heavier nuclei capture protons (p), releasing energy in
the form of photons (γ). Whenever a nucleus is created that deviates too much from
stability (N = Z for light nuclei), indicated by the black squares in Figure 1.1, this
nucleus undergoes β + decay, where one of its protons changes to a neutron, a positron
(e+ ), and an electron neutrino (νe ). One such piece of the rp-reaction chain looks like [33]
34Cl + p −→ 35Ar + γ
35Ar + p −→ 36K + γ
36K + p −→ 37Ca + γ
37Ca −→ 37K + e+ + νe.
If the above proton-capture reactions are faster than the β-decays of the respective nuclei,
then the process can create so-called neutron-deficient (a.k.a. proton-rich) nuclei. Hence
the name “rapid” proton-capture process.
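The idea of competing reactions forming a network can be sketched numerically. The following toy model is the author's own illustration with made-up rate constants, not data from this experiment; it evolves abundances for the fictitious chain A + B −→ C + D, C + E −→ F + G with simple Euler steps:

```python
# Toy two-reaction network: A + B -> C + D, then C + E -> F + G.
# Rate constants are arbitrary illustrative values, not physical rates.
rate1, rate2 = 1.0, 0.5
y = {"A": 1.0, "B": 1.0, "C": 0.0, "D": 0.0, "E": 1.0, "F": 0.0, "G": 0.0}

dt = 1e-3
for _ in range(5000):
    r1 = rate1 * y["A"] * y["B"]   # flow through reaction 1
    r2 = rate2 * y["C"] * y["E"]   # flow through reaction 2
    y["A"] -= r1 * dt
    y["B"] -= r1 * dt
    y["C"] += (r1 - r2) * dt       # C is made by reaction 1, eaten by 2
    y["D"] += r1 * dt
    y["E"] -= r2 * dt
    y["F"] += r2 * dt
    y["G"] += r2 * dt

# The light reactants are depleted while the heavier products build up,
# just as lower-mass nuclei feed higher-mass nuclei in a real network.
print(y["A"], y["F"])
```

Note that the combination A + C + F is conserved by construction, the toy analogue of baryon-number conservation along the chain.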
1.1 The Study of Proton-Rich Nuclei
When classifying nuclei, each falls into one of three general classes: stable, proton-rich, or
neutron-rich. As you might expect, proton-rich nuclei have more protons than a stable
nucleus of the same mass A, and neutron-rich nuclei have more neutrons than a stable
nucleus of mass A. This raises the question: what makes a nucleus stable?
While the nature of stability is much too complicated to be discussed here, it is
worthwhile to briefly consider the simple model for a nucleus. In this model protons
and neutrons, generically known as nucleons, are bound together via the strong force,
which is roughly 100 times stronger than the Coulomb force that causes protons to repel
each other. The bound nucleons must obey the Pauli principle. That is, since they
have half-integer quantum spin, they cannot simultaneously occupy the same spatial and
spin state[3]. Then, since space is at a premium in a nucleus, it makes sense that it
would be highly efficient, i.e. energetically favorable, to pair protons and neutrons. This
pairing is observed as a tendency for the line of nuclear stability to stay near N = Z [4].
Keeping in mind that charged protons repel each other due to the Coulomb force,
it is to be expected that at large A more neutrons are added. As a consequence of
the previously stated criterion for a bound nucleus, there are far fewer proton-rich nuclei
than neutron-rich nuclei. But this does not mean proton-rich nuclei are not interesting.
Proton-rich nuclei are the nuclei which are created in explosive hydrogen burning
events, as will be explained in Chapter 2. Hence, it is these nuclei that we must study if
we are to understand the astrophysical events in which explosive hydrogen burning occurs.
While many proton-rich nuclei are created and participate in the rp-process, it would be
incredibly time consuming to study them all. So, given guidance from theoretical nuclear
astrophysicists, experimental nuclear astrophysicists set out to measure the nuclei of
particular importance.
More information will be given in Chapter 2 as to which nuclei are important in the
rp-process, but for the moment we can just refer to these as waiting points. Waiting
points, as the name indicates, are nuclei that delay the flow of the rp-process. This
study will contribute information to the task of finding out how slow one of these waiting
points, 68Se, is. Though there are theoretical predictions, experiment is needed
Figure 1.2: Proposed level scheme of 69Br.
to verify or falsify the theory. The experiment which this study simulates will add one
piece of the required experimental input.
1.2 The Study of 69Kr

69Kr (Kr is the symbol for Krypton) is not itself a waiting point nucleus for the rp-process,
but it is key in studying the proton capture of 68Se. It might then be surprising
that a quick reference to the chart of nuclides, or the periodic table, reveals that the
nucleus one unit of Z greater than Selenium's Z = 34 is Bromine (Br) with Z = 35,
and not Krypton (Z = 36). However, recalling that proton capture is a process which
transforms nuclei as A_Z S_N −→ (A+1)_(Z+1) T_N, it becomes apparent that proton
capture on 68Se creates the (likely) proton-unbound 69Br[5]. Note that proton capture
by 68Se is still of interest because reaction-rate timescales are short enough in the
explosive astrophysical environments in which the rp-process occurs to allow 69Br to
capture a proton before it expels the initially captured proton.
Because 69Br is too short-lived to allow for study in the lab[6], we instead exploit
a fortuitous characteristic of 69Kr. To the advantage of experimental nuclear
astrophysicists, 69Kr undergoes β+-decay to become 69Br. The decay product is so
short-lived that the entire decay chain, 69Kr −→ e+ + νe + 69Br −→ 68Se + p, is referred
to as the β-delayed proton emission of 69Kr. In a way that will be outlined in Chapter 5,
the light charged products of this decay chain, e+ and p, are detected with relative ease.
When 69Kr undergoes β+-decay, it does not necessarily populate the ground state of
69Br. The "state" refers to the total energy of the nucleons within the nucleus; a higher
state has more energy. The binding energy per nucleon varies from nucleus to
nucleus. We can determine this energy from a level scheme and, consequently, determine
possible relevant decays. One possible level scheme for 69Br is shown in Figure 1.2 [7].
Here the arrows from 69Kr to levels in 69Br represent positron emissions of different
energies and the arrows from 69Br levels to 68Se represent proton emissions of different
energies. It should be noted that of the shown 69Br levels, only that labeled IAS is
confirmed to exist[5].
In the experiment that this study simulates (described in more detail in Chapter 5),
the goal is to identify other levels in 69Br. Due to experimental limitations, Xu et al.
were only able to identify β-delayed proton emission involving 69Br's isobaric analog state
(IAS). As the study was forced to look at a small range in proton energy, they chose to
look at energies that corresponded to the IAS. This is because the majority of β-decays
tend to populate the IAS due to its similar level structure [8], and indeed Xu et al.
were able to determine that 83% of 69Kr decays populated the IAS. Therefore, other β-decay
branches, and consequently other proton emission branches, can be expected to occur in
the β-delayed proton emission of 69Kr.
Why not be satisfied with the known decay branch, since it is so dominant? This is
answered simply by relating the excitation energy E of the IAS to its respective temperature
T using the well-known relation E ∼ kT, where k is Boltzmann's constant. Inserting
the IAS energy of 4.07 MeV (1 MeV = 1 million electron-volts) provides the rough
astrophysical temperature necessary to populate this state: T ∼ 1 × 10^10 Kelvin. This is
over 10 times the typical temperature associated with expected rp-process sites[9], as shown
in Chapter 2. Given the previously stated information about the levels in 69Br, it is apparent
that the levels of interest will be at energies below the IAS and that these levels will
scarcely be populated. More details on the experimental set-up are given in Chapter 5.
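The estimate above is a one-line calculation and can be reproduced directly (Boltzmann's constant in MeV/K is a standard value; the factor-of-a-few ambiguity in E ∼ kT is why only the order of magnitude matters):

```python
k_boltzmann = 8.617e-11  # Boltzmann's constant in MeV per kelvin
E_IAS = 4.07             # MeV, excitation energy of the 69Br IAS

# Rough temperature needed for thermal population of the state
T = E_IAS / k_boltzmann
print(f"T ~ {T:.1e} K")  # a few times 10^10 K, i.e. of order 10^10 K
```

The result agrees at the order-of-magnitude level with the ∼10^10 K quoted in the text, well above typical rp-process site temperatures.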
1.3 Importance of Simulations
If we could isolate a single 69Kr nucleus, observe it decay to 69Br, collect the proton
with our detector system, and repeat this process many times over, then identifying new
proton branches would be somewhat trivial. However, this simple picture is far from
reality. The first detail to be considered is the time between 69Kr decaying to 69Br and
69Br emitting a proton. The second concerns the process by which 69Kr is produced and
delivered to the detector system.
As was stated in the previous section, 69Br is proton-unbound. This means that,
upon coming into existence, it almost immediately expels a proton. The only information
on the lifetime of 69Br comes from its non-observation, which places an upper limit on
its half-life of t_1/2 < 24 nanoseconds[10]. This means that a positron is emitted and
detected in our detector system and, on average, less than 24 ns later a proton is emitted
and detected.
However, the time for the electronics to process information on detected particles so
that the information (e.g. the energy they deposited) can be recorded is of the order of
microseconds[11]. So we must instead gather the information on the positron and the
proton at the same time. This leads to a summing effect that effectively shifts the proton
energy peak to higher energies. In order to correct for the β summing effect, simulations
are required to evaluate its impact. Once the shift in the energy distribution peak is
determined, we will be able to more accurately determine the energy of protons emitted
in various decay branches.

Figure 1.3: NSCL Cyclotrons and A1900 Fragment Separator.
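The β-summing shift can itself be illustrated with a toy Monte Carlo. The following is a pure-Python sketch with invented detector parameters (proton line energy, resolution, and mean positron deposit are all illustrative assumptions); the real simulations in this work use GEANT4 and the full detector geometry:

```python
import random

random.seed(1)

E_PROTON = 2.0    # MeV, hypothetical proton line energy
SIGMA = 0.05      # MeV, hypothetical detector resolution
BETA_MEAN = 0.15  # MeV, hypothetical mean positron energy deposit

def detected_energy(sum_beta):
    """One event: proton energy smeared by resolution, optionally
    summed with the positron's energy deposit in the same silicon."""
    e = random.gauss(E_PROTON, SIGMA)
    if sum_beta:
        e += random.expovariate(1.0 / BETA_MEAN)  # positron deposit
    return e

n = 100_000
peak_no_sum = sum(detected_energy(False) for _ in range(n)) / n
peak_summed = sum(detected_energy(True) for _ in range(n)) / n
print(f"shift ~ {peak_summed - peak_no_sum:.3f} MeV")  # roughly BETA_MEAN
```

The summed distribution sits systematically above the true proton energy, which is exactly the shift the simulations must quantify and remove.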
Further complications arise due to the 69Kr production process. Since 69Kr has a half-life
of 32 ms[10], it must be produced just prior to implantation in the detector system.
The process by which it is produced is called fragmentation. In fragmentation, a heavy
isotope, here 72Kr, is accelerated as the primary beam and collided with a production
target, here Beryllium. These collisions produce many different kinds of isotopes, occasionally
producing our nucleus of interest. A series of magnets, here the A1900 Fragment
Separator (see Figure 1.3[30]), then separates out the nucleus of interest, but the
process is often imperfect and other nuclei make it to the detector system. The simulation
models only the process of interest, i.e. β-delayed proton emission, and thus should
help in discerning the relevant data from the whole.
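The need for prompt delivery follows directly from the decay law. The quick check below uses the quoted 32 ms half-life; the flight and delay times are illustrative assumptions, not measured values:

```python
T_HALF = 0.032  # s, 69Kr half-life (32 ms, as quoted in the text)

def surviving(t):
    """Fraction of 69Kr nuclei remaining after time t (radioactive decay law)."""
    return 0.5 ** (t / T_HALF)

# An assumed microsecond-scale flight time through the separator versus
# an illustrative tenth-of-a-second delay before implantation:
print(f"after ~1 us flight: {surviving(1e-6):.4f}")  # essentially all survive
print(f"after 100 ms delay: {surviving(0.1):.4f}")   # most have decayed
```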
One may wonder how it can be known that simulations are accurately reproducing
the physical conditions of the experiment. This subject is discussed briefly in Chapter
3 and at length in Chapter 4. The main simulation of this project and the experiment
it simulates are detailed in Chapter 5. How this simulation will be used to aid in the
interpretation of the experimental results follows in Chapter 6.
Chapter 2
Astrophysical Motivation
Since ancient times humans have been intrigued by the lights that illuminate the night
sky. For those observing before the invention of the telescope, these objects were simply
unchanging points that were static with respect to each other. Once in a very long while
a new light would appear, often shining throughout the day, and then slowly fade out of
existence. The Crab Nebula (see Figure 2.1[15]) is a well-known example of one of these
transient lights in the night sky, observed by the Chinese in 1054 AD. These occurrences
offered the first clues that these lights were much more than decorations on the ceiling
of the celestial sphere.
With the invention of the telescope, astronomers were able to analyze the lights of
our night sky in more detail. The introduction of the fields of spectroscopy, the study of
light's interaction with matter as a function of wavelength, and spectrometry, the study
of the creation and absorption of light due to atomic and nuclear structure, empowered
astronomers with the quantitative capabilities they have today. Applying these two
disciplines, Suess and Urey [16] were able to determine the relative abundances of the
nuclei in our solar system. One year later Burbidge, Burbidge, Fowler, and Hoyle[17],
and separately Cameron[18], used this information and that from the then 20-year-old
field of nuclear physics to propose how all of the nuclei in the universe were synthesized.
Thus nuclear astrophysicists were provided one of the major foundations of the field.
Nuclear astrophysics studies the synthesis of elements in stars and stellar environ-
ments and their dispersion into the interstellar medium. It provides us with unique
insight into the building blocks of nature, allowing us to study nuclei in environments
that are only marginally reproducible on earth. Volumes could be (and have been) filled
on the many nucleosynthesis sites extant in our universe, but the remaining discussion
will focus on the main site that is pertinent to this study. This is the astrophysical site
known as a type-I x-ray burst.
Figure 2.1: The Crab Nebula as seen from the Hubble Space Telescope.
2.1 X-Ray Bursts
Type-I x-ray bursts are frequently recurring thermonuclear (driven by temperature-dependent
nuclear reactions) explosions on the surface of an accreting neutron star's crust[19].
They were first observed in the early to mid-1970s [20][21], characterized by a steady
flux of light in the x-ray region (0.01 nm ≤ λ ≤ 10 nm) with an occasional sharp rise in
luminosity followed by an exponential decay. Astrophysics theorists produced suggestions
as to their underlying cause shortly after a multitude of these observations were
published[22][23][24]. The essence of these models involves a binary star system in which
a neutron star and a main sequence or giant star revolve around each other, as in Figure
2.3.
A neutron star is an extremely dense body, mostly composed of neutrons, left behind
from a massive star's core-collapse explosion, known as a type-II supernova. Main sequence
stars synthesize helium from hydrogen in their cores, but are mostly hydrogen. Giant
stars have evolved off the main sequence, having burned through most of their core
hydrogen, and have an outer envelope made mostly of hydrogen and helium. All stars
begin as main sequence stars and evolve into giant stars. Those that are roughly 8 times
the mass of the sun (8 M⊙) and above eventually become type-II supernovae. The main
paths taken in stellar evolution are shown in Figure 2.2[25]. For a typical x-ray bursting
system the neutron star is a roughly 1.4 M⊙ supernova remnant and its companion is a
giant star of order 1 M⊙ or less.
Figure 2.2: Stellar evolution possibilities.

In an x-ray binary system, the giant star transfers gas to the neutron star in a process
called "accretion". Mass transfer is possible because the giant star has expanded so that
its gas fills its Roche lobe while overlapping the neutron star's Roche lobe. The Roche
lobe is essentially the region in which a star's mass is gravitationally bound [26]. Once mass
transfer is initiated, gas flows freely from the surface of the giant star to the surface of the
neutron star, as shown in Figure 2.3[27]. As a result of accretion, gas rich in hydrogen
and helium builds up on the crust of the neutron star. All of these systems emit
x-rays due to the gravitational energy released during accretion, and nearly half also
undergo type-I x-ray bursts [28].
Due to the system's steady emission of light, which peaks in the x-ray portion of
the spectrum, astrophysicists can infer the temperature of the environment via the
well-known relations

E = hc/λ ∼ kT,

T ∼ (4.1 × 10^−21 MeV·s)(3 × 10^8 m/s) / [(0.1 × 10^−9 m)(8.6 × 10^−11 MeV/K)] ∼ 1 × 10^8 K,

where k is Boltzmann's constant, h is Planck's constant, and c is the speed of light. A
more realistic approach utilizes Wien's displacement law, T = (2.9 × 10^6 nm·K)/λ =
(2.9 × 10^6 nm·K)/(0.1 nm) = 2.9 × 10^7 K. While this allows us to calculate the
temperature at the surface of the bursting site, this temperature must be related to the
temperature of the bursting zone via models. In general, models infer the thickness of
a burning layer by calculating how many nuclei, each releasing roughly 5 MeV of energy
in a given fusion reaction, it would take to create the observed energy output of
∼10^38 erg/sec. Observational indications that the site is a neutron star, which are
beyond the scope of this paper, can be included to arrive at the general conditions for
an x-ray burst that can be included in simulations.

Figure 2.3: Artist's conception of an accreting binary system.
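Both temperature estimates above are easily reproduced with the constants as quoted in the text:

```python
h = 4.1e-21   # Planck's constant, MeV * s (as quoted above)
c = 3.0e8     # speed of light, m/s
k = 8.6e-11   # Boltzmann's constant, MeV / K
lam = 0.1e-9  # 0.1 nm, a typical x-ray wavelength, in meters

# Crude estimate: photon energy E = hc/lambda ~ kT
T_crude = h * c / (lam * k)

# Wien's displacement law, T = b / lambda with b = 2.9e6 nm*K
T_wien = 2.9e6 / 0.1  # wavelength expressed in nm

print(f"E ~ kT estimate: {T_crude:.1e} K")  # ~1e8 K
print(f"Wien's law:      {T_wien:.1e} K")   # ~3e7 K
```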
Why study x-ray bursts? From a statistical point of view, x-ray bursts are unique
astronomical objects in that over 1,000 have been observed from over 40 separate sites
[29]. We are thus able to see how bursts vary between binary systems and how the bursts
vary within a single system's recorded bursting history. This allows for the identification
of common features and dependencies of the bursts on things such as the giant star's
mass, the giant star's composition, and the rate of accretion. In astronomy such a robust
data set is very rare, meaning that x-ray bursts are invaluable sources of information.
X-ray bursts have the potential to provide a wealth of physics information, though much
work must still be put into understanding them before that wealth can be exploited. Of
particular interest is the nuclear physics of x-ray bursts, which provides input regarding
the neutron star's radius and its crustal composition. Each of these allows us to learn
about neutron star structure, which provides information about the equation of state
of dense nuclear matter and consequently about the strong nuclear force [30]. Observing
the effects gravitational redshift has on the thermal emission spectrum of matter being
accreted onto the neutron star provides information on its mass-to-radius ratio, and
studying the final abundances of nuclei produced by the rp-process indicates what the
crustal composition of the neutron star would be[19]. However, we will not be able to extract
these parameters with the necessary level of confidence until we have improved our models
of x-ray bursts[31], which are very sensitive to nuclear physics data [32]. Currently, one
of the main ways to determine which nuclear physics data are of particular importance in
x-ray bursts is to determine which data are important in the rp-process.
2.2 RP-Process and Its Waiting Points
The rp-process is a mechanism of nucleosynthesis in which protons are captured on nuclei
to create successively heavier proton-rich nuclei. Proton capture can be stalled when a
nucleus is reached whose proton capture rate is prohibitively small so that it must undergo
β + -decay for the process to continue[33] . The nuclei that cause this occasional stalling
are especially interesting because differences in the time the rp-process waits there can
cause a large difference in the final abundance of nuclei produced [19] . These nuclei
are called “waiting points”. Nuclear physics data combined with nuclear astrophysical
models allow us to determine which nuclei are waiting points.
Before the rp-process is initiated, the x-ray burst begins due to thermonuclear runaway,
a process in which a reaction that is highly sensitive to temperature releases energy,
increasing the temperature, thereby increasing the reaction rate and providing positive
feedback [26]. Here runaway is triggered by the highly temperature-sensitive triple-alpha
reaction, which ultimately synthesizes 12C from three 4He (a.k.a. α). Temperatures
then rise to initiate α-capture, which then provides the energy and seed nuclei necessary
to initiate hydrogen burning for the rp-process [34]. The exact path of the rp-process
is highly dependent on temperature and density, particularly regarding capture on light
nuclei due to competition with α-capture-induced reactions, but above calcium the rp-process
is determined only by proton captures and β-decays[33]. For nuclei in this region
along the proton drip line, the rp-process is dominated by the β-decay lifetimes of waiting-point
nuclei [33]. Thus if the process were able to bypass a waiting-point nucleus via
proton capture, the path of the rp-process, and consequently the light curve and final
composition of the x-ray burst, could be significantly altered[35].
Of the high-A nuclei in the rp-process, one that has been identified as a waiting
point from β-decay measurements is 68Se. However, uncertainties remain in calculating
the proton-capture reaction rate, so it is possible that the successive capture of two protons
could allow the rp-process to bypass 68Se [19]. The issue then is to determine these
proton-capture rates on earth.
We have seen that proton-rich nuclei are short-lived. Given that they do not maintain
their proton-rich state for very long, it is apparent that adding a proton to one of these
nuclei is not a simple task. So, to circumvent this issue, we instead study the opposite
process. Here this means we study the proton emission of 69Br instead of the proton
capture by 68Se. Though the rates for these processes are far from equivalent, the nuclear
structure is the same. As the mass difference, given by nuclear structure, between 69Br
and 68Se is the most important variable in determining the proton-capture Q-value, we
attempt to determine 69Br's ground state mass. We determine this mass by detecting
the energy of protons resulting from 69Br proton emission, which we study by necessity
via the β-delayed proton emission of 69Kr. The method of determining this structure is
detailed in Chapter 5.
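The relation between the measured proton energy and the quantity of interest can be sketched as bookkeeping with mass excesses. In the sketch below, only the proton mass excess (7.289 MeV) is a standard value; the 68Se and especially the 69Br mass excesses are placeholder numbers, since pinning down the latter is precisely the goal of the experiment:

```python
# Mass excesses in MeV. DELTA_BR69 is a HYPOTHETICAL placeholder value,
# and DELTA_SE68 is only approximate; the experiment described here is
# what would actually constrain the 69Br ground-state mass.
DELTA_P = 7.289     # proton (standard value)
DELTA_SE68 = -54.2  # 68Se, approximate illustrative value
DELTA_BR69 = -46.3  # 69Br, placeholder (experimentally unknown)

# Proton-capture Q-value for 68Se(p, gamma)69Br:
Q_capture = DELTA_SE68 + DELTA_P - DELTA_BR69

# A negative Q-value means 69Br is proton-unbound: 69Br -> 68Se + p
# releases energy equal to -Q_capture, carried mostly by the proton.
print(f"Q(p,gamma) = {Q_capture:.3f} MeV")
```

Measuring the emitted proton energy thus fixes the 69Br mass relative to 68Se, which is the input the rp-process calculations need.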
Chapter 3
Simulation Packages
The physics of charged-particle interactions in detector systems involves many processes
that are highly sensitive to incident energies, occur on small timescales, and involve small
spatial scales. As a result, modeling this physics requires a simulation code that is able
to take into account large and varied data sets and implement them with the finest
possible spatial and temporal resolution. While creating a personal simulation package
for such physics is not out of the question, the process would require years of effort to
ensure a properly working code. Thus, to avoid reinventing the wheel, it is often more
practical to employ previously developed and tested simulation packages.
The simulation packages used here were Monte Carlo N-Particle X (MCNPX)[36],
Geometry and Tracking 4 (GEANT4)[37], and Monte Carlo Simulation of Electron
Trajectory in Solids (CASINO)[38]. Each of these packages is the result of over a decade of
development and testing. Also, each is relatively easy to acquire, though MCNPX takes
some extra effort. GEANT4 was the primary simulation package used, and MCNPX and
CASINO were used to verify its results. The following sections will attempt to briefly
explain the uses and methods of simulation for each package.
3.1 MCNPX
MCNPX is a simulation package developed by Los Alamos National Laboratory which
has major updates roughly every 3 years [36] . The version used in this study was MCNPX
2.6.0 (package ID:C00740MNYCP02), originally released in April 2008. A copy of this
software can be obtained by contacting the Radiation Safety Information Computational
Center (RSICC)[39] . However, as MCNPX is a product of the United States Department
of Energy, it requires paperwork to be submitted and it is advisable that contact with
RSICC is initiated by a laboratory or university software representative. The entire
process can take anywhere between two weeks and one month before the software is in
hand.
Figure 3.1: A minimalistic MCNPX simulation input file.
Additionally, it must be noted that once the MCNPX package has been approved
for use, the source code is likely to be unavailable for viewing. A user's manual that
describes every aspect of the physics and algorithms employed in the code is provided in
addition to the simulation package. It is generally sufficient in describing what the code
can do and how one can make the code perform specific tasks. It is important to note
that this manual cannot be shown to anyone who does not also have permission from
RSICC to use MCNPX, under penalty of imprisonment. However, the input and output
of the simulations may be shared freely amongst research colleagues.
Regarding input, simulations in MCNPX are coded in a unique, single-document
format separated by empty lines into three sections: cell, surface, and data. The
input in each section generally devotes one line of code to a given aspect of the
simulation, where each line is referred to as a "card". Thus a simulation is coded card
by card, where the simplest of simulations could contain as few as 10 cards (a very
simple example is shown in Figure 3.1). In the author's opinion, a marked advantage
here is simplicity; however, the accompanying disadvantage is a general obfuscation of the
simulation's inner workings.
As it is easiest to describe the input cards with an example, Figure 3.1 will be
described beginning at the first line and continuing to the bottom, with some mention of
cards and options for cards that are not shown. While this example fills only 26 lines,
including comments, some simulations in this study were over four times this length,
underlining the importance of ample comments (see Appendix). The example shown fires
a neutron 1 × 10^5 times, with an initial direction selected from an isotropic distribution,
from the center of a sphere of Oxygen located near a sphere of Iron, all inside a cube of
Carbon, which itself is located inside an infinite vacuum. Data are recorded only for
the flux of the neutrons and the distance traveled by neutrons within the Iron sphere.
A given cell card, of which there are four, lists the assigned cell number, the number
assigned to a desired material that is defined below, the density of said material, and the
volume of the cell as defined by the surface cards. A negative sign on the density indicates
units of g/cm^3, and a negative sign on a cell's surface indicates that the cell is located
within that surface. Thus cell one is Oxygen with a density of 0.0014 g/cm^3 bounded
within spherical surface number seven. Similarly, cell two is Iron of density 7.86 g/cm^3
bounded by spherical surface eight. Cell three is Carbon of density 1.6 g/cm^3 bounded
within the cube defined by planar surfaces one through six. Finally, cell four is the
vacuum that surrounds cell three. Much more complicated cell volumes could be defined;
however, these will not be described here, as this work consisted solely of rectangular,
spherical, and cylindrical cells.
Surface cards, as is evident from the preceding paragraph, describe surfaces that can
be used to construct cells. Unused surfaces will register a warning upon running the
simulation. Surface cards list the assigned surface number, the area type with a label
defined in the MCNPX user manual, and a sequence of numbers describing coordinates
within the entire volume required to fully define said surface, also detailed in the user
manual. Surface one is a plane with its normal oriented along the z-axis, as indicated
by "PZ", at z = −5 cm. Here one may be bothered by the introduction of arbitrary
axes, but it is necessary to make a choice of orientation with the beginning surface
cards and stick to that convention to ensure consistency in defining cell cards. Surfaces
two through six are defined in a similar manner to surface one. As a general note, one
could instead define the surfaces of a rectangular prism in one card. Surface seven is a
sphere, as indicated by "S", with a centroid located at (x, y, z) = (0 cm, −4.0 cm, −2.5 cm)
and a radius r = 0.5 cm. Surface eight is similarly defined but with centroid coordinates
(0 cm, 4.0 cm, 4.5 cm). As with the cell cards, much more complicated surfaces are possible;
many examples of these are given in the MCNPX user manual.
Whereas the previous cards describe the geometry of the simulation, the data cards primarily describe the physics input and output. The importance card, denoted by "IMP", specifies which particle type, here "N" for neutron, to track in which cells. By default MCNPX includes all known physical processes relevant to the tracked particle. Physics relevant to other particle types can be additionally specified by using the "PHY:" card
and its respective options, followed by the symbol for the desired particle, on the line
beneath the “IMP” card. Cell importances are assigned in the order they are defined
with a 1 indicating important and a 0 indicating not important. Note that if the particle
of interest must traverse a cell that may be of no interest in order to get to one that is
of interest, the traversed cell must still be assigned a 1 so that the particle “makes it” to
the interesting cell.
"SDEF" is the source definition card that specifies all of the characteristics of the simulated source. The source shown here is as simple as it could possibly be. In general this card will be used with many more options that additionally specify things such as initial direction, initial energy, and particle type. Directions and energies can be selected from self-defined or predefined functional or discrete distributions. The source here is taken to be a neutron by default, since it is the only particle assigned any importance by the "IMP" card, located at the coordinate (0 cm, −4.0 cm, −2.5 cm). As an additional check, one can specify the cell of origin to ensure the chosen position is correct, or nearly so. If left undefined, the default directional distribution is isotropic and the default initial energy is particle specific and defined in the MCNPX user manual.
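Though MCNPX handles this internally, the default isotropic distribution is simple to sketch. The following standalone C++ fragment (illustrative names, unrelated to MCNPX internals) samples a direction uniformly over the unit sphere by drawing cos θ uniformly on [−1, 1] and φ uniformly on [0, 2π):

```cpp
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

// Sample a unit vector from an isotropic distribution: uniform in
// cos(theta) and in the azimuthal angle phi.
Vec3 isotropicDirection(std::mt19937 &rng) {
    const double kTwoPi = 6.283185307179586;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double cosT = 2.0 * u(rng) - 1.0;              // uniform on [-1, 1]
    double sinT = std::sqrt(1.0 - cosT * cosT);
    double phi  = kTwoPi * u(rng);                 // uniform on [0, 2*pi)
    return { sinT * std::cos(phi), sinT * std::sin(phi), cosT };
}
```

Sampling cos θ (rather than θ) uniformly is what makes the distribution isotropic; uniform θ would cluster directions near the poles.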
The following sets of cards, beginning with "F" and "E", always appear in conjunction and are referred to as "tallies". The cards with "F" followed by some number specify which physical quantity of the source particle to record for output. The letter-number combinations that correspond to given data are listed in the MCNPX user manual. Following "F#:" are the particle type and the cell(s) for which to record this data, as is evident in Figure 3.1. The "E" card indicates how to bin the recorded data, with the number following "E" denoting which type of recorded data it is binning; the units of said binning (usually energy in MeV) are defined for each data type in the MCNPX user manual. If a given data type has already been binned, the number listed is increased by an increment of ten from the previous "E" card that pertains to this data type (e.g., for F2 one could have cards E2, E12, E22, etc.). If one wishes to use the same binning for all "F" cards, then the "E0" card is used.
The "M" cards specify the materials used to fill various cells in the simulation. If a material is specified but not used, a warning will be issued upon running the simulation. The card contains "M" followed by an assigned material number, then the isotope(s) of which the material is composed, each followed by the relative amount of that isotope in the given material. The isotopes are specified with six or fewer numbers, where the first three denote the element and the last three the atomic mass A. If the numbers for A are all zero, the average mass of that element on earth is used. The total amount of isotopes in a given material should add up to some multiple of 10, so as to mimic percentages; however, if they do not, the simulation will normalize the total fractional amount to one and issue a warning. Thus material one is fully 16 O, two is mostly 56 Fe with some 54,57,58 Fe, and three is mostly 12 C with some 13 C. Finally, the "NPS" card indicates how many times
the Monte Carlo simulation will run. Recall that much more complex data cards are
possible, however one should consult the MCNPX user manual for such cards.
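To make the card structure concrete, a hypothetical deck in the spirit of the example described above might read as follows. The surface extents, tally choice, energy bins, and isotopic fractions are illustrative guesses, not the actual contents of Figure 3.1:

```
c --- Cell cards ---
1 1 -0.0014 -7                   $ Oxygen sphere
2 2 -7.86   -8                   $ Iron sphere
3 3 -1.6    1 -2 3 -4 5 -6 7 8   $ Carbon cube, excluding both spheres
4 0         -1:2:-3:4:-5:6       $ surrounding vacuum

c --- Surface cards ---
1 PZ -5
2 PZ  5
3 PY -5
4 PY  5
5 PX -5
6 PX  5
7 S 0 -4.0 -2.5 0.5              $ sphere bounding cell one
8 S 0  4.0  4.5 0.5              $ sphere bounding cell two

c --- Data cards ---
IMP:N 1 1 1 0                    $ track neutrons in cells 1-3 only
SDEF POS=0 -4.0 -2.5             $ source at the center of the Oxygen sphere
F4:N 2                           $ track-length flux tally in the Iron sphere
E4 0.1 1 10                      $ energy bins in MeV
M1 8016 1                        $ pure 16-O
M2 26056 0.917 26054 0.059 26057 0.021 26058 0.003   $ natural-like Fe
M3 6012 0.989 6013 0.011         $ natural-like C
NPS 1E5
```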
To produce output, one must first source the MCNPX package software with the command "source /filepath/". The simulation is then run with the command "mcnpx i=InputFilename o=OutputFilename". Additional commands can be appended to this line for a more customized output. Of particular use in the debugging phase is the option "PRINT"; however, this should not be used for long (> 1 × 10^4 event) simulations, as it creates a large output and consequently slows runtime. Visual output, also invaluable in debugging, can be created by loading input files into the VISED[40] software; however, this too must be obtained from RSICC.
The output of an MCNPX simulation contains many pieces of information, most of which are of little interest here, so an attempt will be made to briefly highlight the important output quantities. The basic structure of the output is as follows: restate the input, describe the input geometry, describe the specified source, list physical characteristics of the initial group of Monte Carlo simulations, list results of "F" tallies as binned by "E" tallies, list statistical qualities of recorded tallies, and list the total simulation time in human units (e.g. actual minutes). A sample output is not pictured here due to its excessive length, e.g. 300 lines for the simple input shown. As was stated, a more verbose output can be printed, and should be during debugging phases, using the "PRINT" option. Additionally, warnings that the MCNPX developers think generally require special attention will be listed after the physics input or output. In practice the output initially requires a close reading with the MCNPX manual in hand, so no effort will be made to describe it any further. Output from simulations performed in this study is not listed in the appendix due to size; however, the author can be contacted if one wishes to consult a full output file of one of the simulations presented.
3.2 GEANT4
GEANT4 is a simulation package developed by users of the CERN facility[41],[42] and is maintained by its user community. The installation used here was version 9.1.0, which was released in January 2008. GEANT4 software is freely available online, provided one is able to download a roughly 0.5 GB file. As the code is open source, one is free to inspect and alter the source code, with the caution that the software has been developed and inspected by hundreds of professionals.
GEANT4 simulations are coded modularly in C++, where hundreds of pre-made classes contained in the original software are available for use. General knowledge of C++ syntax is not required to create GEANT4 simulations, but it is highly recommended. An attempt will be made here to describe the general make-up of a simple GEANT4 simulation; however, due to the modularity of the code, the following description will certainly lack the clarity of that given in the section on MCNPX. All codes described
will be located in the appendix.
Regarding the general modules of the simulation, a main code is run that contains references to the header files of the primary components of the simulation. These primary components are the main geometry, the materials used, the physics processes included, the method by which a single simulation iteration is generated, and the method by which desired data are recorded. Each header file contains definitions of included classes as well as references to minor components of the simulation that specify things such as an individual detector's geometry or a special method of tracking data in a particular component of the simulation. Each header file has a corresponding source file which contains instances of the classes defined in the header. Generally the main code exists in a directory above two separate directories: include, which holds the header files, and src, which holds the source files. Throughout the code, references are often made to predefined classes that contain information such as a method of specifying a given geometry or an algorithm to sample a Gaussian distribution. Other than this, no pieces of physics, input, or output are provided for the user. As such, the author advises that one begin by inspecting and imitating working examples, using references such as the GEANT4 User Support[43] and the doxygen GEANT4 documentation[44].
The main code is the driver for all other codes. It initializes the conditions of the simulation by calling the main modules of the simulation. The driver constructs the simulation volume, initializes the method of outputting the results, initializes the physics to be included, initializes the visualization method, initializes the method of generating a given simulation event, and finally runs the simulation event. The simulation is compiled such that it handles a single event. The compiled code is iterated over for a user-specified number of events by a compact macro that will be described after the main code components.
Construction of the volumes within the simulation is generally done by a code called
DetectorConstruction, or something similar to this. Within this code each physical com-
ponent of the desired simulation is constructed and oriented within an arbitrarily defined
"world volume". It is generally more convenient to reference the components of the system via separate modules so that one could swap, say, a cylinder made of aluminum for a rectangular prism made of lead by changing a few lines of code. For each component
constructed, a material is specified to fill the volume, the volume itself is specified via
some shape and central coordinates, and an associated “messenger” class is called. A
messenger class is necessary for any volume through which the particle of interest might
pass on its way to a volume of interest, much like the "IMP" card in MCNPX. Additionally, for volumes that are detectors which will ultimately "detect" the particle, such as the double-sided silicon strip detector in this study, the volume must be specified as "sensitive". Sensitive volumes require an additional code that specifies what to track in said volume and how.
The method of recording and outputting results varies and is very loosely defined in the GEANT4 documentation. The results code must interface with the code that generates
the initial source particles as well as the code for the sensitive detector. (The code used
here, Results, was based on work by Ron Fox.) In this code a hexadecimal value is
assigned to variables that indicate the type and initial energy of source particles and the
type and energy of particles that impact a sensitive detector. Prior to printing the string
of assigned variables for an event to an output file, a variable is written indicating the
beginning of an event and, after the string of variables, a variable is written indicating
the end of an event. This data is sorted into a ROOT readable format by a code that will
be described after the description of the macro that runs the simulation. Note that one
does not necessarily need to output results in the same manner as described here, but
some method must be employed if one is to go beyond simply visualizing the simulation
results.
The physical processes to be included in a simulation are contained in the code called something like PhysicsList. Here the processes by which a source particle can interact with the detector system are individually listed for each potential particle of interest. For certain particle types, like the electron, special "low energy" (< 1 MeV) processes can be employed. The processes are assigned an order in which to be evaluated, and some are executed only once a particle has dropped below a given kinetic energy. For example, when simulating a positron in this study, the included processes are scattering, ionization, bremsstrahlung, and, only once the positron is "at rest", annihilation. During a single event of a simulation, the particle will move along in steps, with the direction of motion decided in a probabilistic (but physical) manner and with step lengths specified in the GEANT4 documentation (but changeable). Certain types of processes, e.g. scattering and ionization, can happen at substeps of a given step, while others, e.g. bremsstrahlung, only occur after a step. Additionally, one can specify here at what energy to effectively stop a given particle, or simply choose to accept default values. (Here default values were used, because a positron is stopped at 1 keV, whereas the detector thresholds in the actual experiment are no lower than 70 keV.)
As with simulation results, the method of visualization varies widely amongst GEANT4 codes. Here the package VRMLview[45] version 1.0 was used. This package is not necessarily recommended, particularly as it is from 1997; however, it was used in this study because it was available. Regardless of the package employed, a code generally named VisManager initializes the graphics system and allows it to communicate with the simulation as it runs. Creation of a visualization can be turned on or off in the compact macro that runs the simulation. It is advisable not to create visualizations for simulations of more than 1,000 events unless a computer with considerable processing power is employed.
Source particles are emitted (“fired”) by a code generally named something like Pri-
maryGeneratorAction. This code specifies all of the characteristics of the source. Here a
source can be as simple as a monoenergetic electron fired in a single direction, or it can
be made to fully replicate an actual radiation source. If the latter is chosen, one must code all source particles, energies, and their respective probabilities of emission. When
choosing this option it is wise to consult the National Nuclear Data Center (NNDC)[46]
and to be sure to include only decay branches that have a statistically significant proba-
bility of occurring during the total number of simulations run. Regardless of the source
chosen, one can specify the initial position of the source as well as the direction in which
to fire the source particle, where each could be chosen probabilistically.
To compile the code, one must first source the GEANT4 library with the command "source env.sh". env.sh and env.csh are two files which exist in the same directory as the driver. These files contain information on how to compile the GEANT4 code. Before the code is executed, one must ensure that one's .bashrc file contains the command "export G4WORKDIR=/filepath". One then runs "make clean", "make", and finally "./ExecutableName". At this point a compact code for a single simulation has been created, which typically shares the name of the executable file but lacks the file extension.
The full simulation is finally run using a compact macro, here my vis.mac. This is
a short code, typically tens of lines long, in which it is specified how verbose the output
of the simulation should be, whether or not to create a visualization, and how many
simulation events to perform. The full Monte Carlo simulation is then run with the
command “./ExecutableName CompactMacro OutputFile”. The contents of the output
file are specified by the results code.
For analysis it is desirable to convert the output into a ROOT[47]-readable format. (There are likely many ways to do this; however, the author simply followed an example created by Ron Fox.) This code, generically called something like Sort2Root, initializes a ROOT Tree, its Branches, and their Leaves to bin the data from the OutputFile into histograms that can be used in analysis. Additionally, this code can be used to apply detector-like resolution by effectively smearing out data bins with some desired distribution. Here this was done by taking each event energy as the centroid of a Gaussian and using the weighted probability of the Gaussian to select a new energy, finally putting the event into the corresponding new energy bin. Prior to writing such a code, it is advisable that one gain some familiarity with the ROOT software. Examples of all of the previously described code are included in the appendix as they looked for the final performed simulations. Source code for simpler versions is also available if one wishes to contact the author.
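The smearing step described above can be sketched in standalone C++; the actual Sort2Root code used ROOT classes, and the resolution parameter here is illustrative rather than the real detector resolution:

```cpp
#include <random>

// Apply a detector-like resolution: the true event energy is the centroid
// of a Gaussian, and a new energy is drawn from that Gaussian before the
// event is placed into its (new) energy bin.
double smearEnergy(double trueEnergy, double fwhm, std::mt19937 &rng) {
    const double sigma = fwhm / 2.3548;  // FWHM = 2*sqrt(2 ln 2)*sigma
    std::normal_distribution<double> gauss(trueEnergy, sigma);
    return gauss(rng);
}
```

Averaged over many events, this leaves peak centroids unchanged while broadening each peak to the chosen full width at half maximum.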
3.3 CASINO
CASINO is software created to model the trajectory of electrons in solids, particularly for situations involving Scanning Electron Microscopes (SEM)[48]. The code was developed by research teams at the Université de Sherbrooke[38] and is freely available for download online, provided one registers for permission on their website. The full source
code is available for download; however, only the executable graphical user interface (GUI)
was used in this study.
A simulation in CASINO consists of firing an electron given some initial positional
distribution with some initial angular distribution directly into a volume of specified size
and composition. Results from the simulation are given in graphical form and include
information such as the energy of backscattered electrons, depth to which electrons pen-
etrated, paths electrons followed, and x-rays emitted due to electron interactions. In
studies such as the one performed, CASINO is useful for verifying simulation results by
checking quantities such as the required thickness of a given material to stop an electron
of a given energy.
One begins creating a simulation by specifying the material composition and thickness
through which the electron is to be fired. Multiple layers can be created, however here a
single layer of Silicon was used. The range of electron energies, the positional and angular
distributions of fired electrons, as well as the total number of electron firing events to
simulate are then specified. One then chooses which data to output in graphical form.
Next the physics models for different electron energy loss processes and random number
generator are chosen. Finally one chooses how many electron trajectories to display and
the simulation is started. The entire process is quite simple to learn and, in the author's opinion, this simplicity justifies CASINO's use even though its output is limited.
Chapter 4
Verification and Validation of
Simulations
In order to properly interpret the results of any experiment in nuclear physics, it is often
necessary to have an accompanying simulation. Simulations provide insight into the
involved physical processes and they provide a laboratory in which one can freely change
experimental conditions and visualize their impact on the results. Here the simulation
is required to correct the observed particle energy for β-summing to extract the correct
proton energy. However, before a simulation can be used it must be extensively tested to
ensure it provides results that replicate the system of interest. The processes of testing
a simulation are known as verification and validation.
Verification is ensuring that a code accurately reproduces the desired theoretical
model being used to describe a physical situation. Validation is ensuring that the cho-
sen physical model accurately represents the physics of the situation of interest[49] . In
verifying a code one uses methods such as plausibility checks, back of the envelope calcu-
lations, rigorous examination of output for known cases, echoing of input upon output,
and comparison with codes made for a similar purpose. In validating a code one per-
forms a controlled, usually simple, experiment whose results are robust and compares the
experimental results to those of a simulation replicating that experiment. The actions
taken to verify and validate the simulations presented in this study are given below.
4.1 Verification
Verification has two main classes: internal verification and external verification. Internal
verification checks the output of a given code against its input to ensure the desired
model was properly simulated. External verification, or benchmarking, compares the
results of simulations performed with separate codes that have identical, or as identical
as possible, conditions. Internal verification was performed here for the MCNPX and
GEANT4 simulators and benchmarking was performed for GEANT4 using MCNPX and
CASINO.
MCNPX prints a multitude of information in its output files that can be used in
verifying a simulation. The first set of information described aids in determining if
the simulation setup is correct. The input file used to generate a given output file is
included at the beginning of the output so that it is very clear which simulation led to
which results. Warnings are sometimes listed here when errors are made in constructing
the geometry, though errors that elicit these warnings often stop the simulation from
running. The volume, material, and importance of each cell are listed in Print Table 60 so that one can be sure the geometry is as intended. Print Table 126 provides the
user with information on the total number of particle interactions occurring in a given
cell. While this information doesn’t necessarily indicate the simulation was correct, it
can provide a quick confirmation that something is very wrong, e.g. if a given cell has an
anomalously high relative number of interactions. Arguably the most important check
on the simulation setup is the visualization, where the general orientation of objects in
the simulation can be quickly confirmed.
A second set of MCNPX output information that can be used for verification indicates the statistical validity of the entire Monte Carlo simulation as a whole. While these checks do not guarantee statistical validity, they do provide supporting evidence that an ample number of events have been run to ensure convergence. The tally information given includes a statistical error assigned to each bin of √N, where N is the number of counts in a bin. The MCNPX manual suggests that no bin have a relative error greater than 10% and provides near assurance of convergence if no bin has a relative error greater than 1%. These errors are used to provide information following the tallies on the validity of the simulation using 10 statistical checks, which are described in the MCNPX user manual. In practice, final simulations should be run to satisfy all suggested statistical conditions; however, such simulations are computationally expensive. Thus a practical solution during development is to repeatedly reduce the number of events of the initially working simulation by a factor of 10 until the results no longer agree. Subsequent simulations run during development should be run with the much smaller number of events, though occasional checks should be made to ensure these results are convergent with a longer simulation. For example, simulations performed in this study often satisfied all statistical checks for 1 × 10^7 events, but they provided the same results for 1 × 10^5 events. So many of the simulations during development were run with the smaller number of events, but those for final results were run with the greater number of events.
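The bin-error criterion reduces to a simple 1/√N rule for Poisson-distributed counts; a minimal sketch (the function name is illustrative, not an MCNPX routine):

```cpp
#include <cmath>

// Relative statistical error of a counting bin with N counts:
// sqrt(N)/N = 1/sqrt(N). Thus ~1e2 counts per bin gives a 10% error
// and ~1e4 counts per bin gives a 1% error.
double relativeBinError(double counts) {
    return 1.0 / std::sqrt(counts);
}
```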
While equally thorough verification output is possible with GEANT4, it was not
implemented in this study in order to simplify coding of the output. Internal verification
consisted mainly of checks with the visualization as well as ensuring the output was reasonable. An example of reasonable output is roughly even total energy depositions for all detector strips in a given hemisphere when the source is isotropic. Another more
Figure 4.1: Near-identical initial conditions used for verification simulations. (a) MCNPX verification setup; (b) GEANT4 verification setup.
rigorous check is ensuring the energy deposited in a detector is never greater than the initial
energy of the source particle.
Once MCNPX and GEANT4 were separately internally verified, GEANT4 was bench-
marked with MCNPX. With as identical conditions as possible, several simulations were
performed. The physical setup of the simulations consisted of an Aluminum cylinder sur-
rounding a double-sided Silicon strip detector (DSSD), with a vacuum in the open space.
The cylinder has a length l = 16.32cm, inner radius RI = 7.5cm and outer radius RO =
7.62cm and the DSSD has dimensions length × width × height = 4cm×4cm×0.05cm,
where the long axis of the cylinder and the height of the DSSD are oriented along the
z-direction. The plane of the closest cylinder end is 8.27cm from the closest DSSD surface
and the centroid of the DSSD is at (x,y)=(0,0) with respect to the cylinder’s coordinates.
The source is located at the center, (x,y,z)=(0,0,0), of the cylinder emitting monoener-
getic electrons isotropically. (See Figure 4.1.)
Each verification simulation was carried out with a single electron energy. The energies chosen were 0.481, 0.553, 0.565, 0.975, 1.000, 1.047, 1.059, and 1.682 MeV. 1.000 MeV was chosen arbitrarily, while the other energies were chosen because they are each included in the source for the validation experiment. No detector resolution was included in the simulations. For all energies the simulations had excellent agreement, except for a marked divergence below 0.17 MeV, where MCNPX generally had twice the counts of GEANT4. As an example, the counts vs. deposited energy are shown for the low, middle, and high initial source energies in Figure 4.2, where each simulation had 1 × 10^6 initial source events with an overall isotropic distribution.
GEANT4 was verified in a very general way with CASINO. As identical simulation conditions could not be reproduced, CASINO was instead used to check whether it was plausible
Figure 4.2: General agreement between MCNPX and GEANT4 verification simulations. (a) 0.481 MeV e−; (b) 1.000 MeV e−; (c) 1.682 MeV e−.
that electrons of a given energy could be stopped in Silicon of a given thickness. To mimic the DSSD, the material chosen in the CASINO verification simulation was 500 µm of Silicon. The initial electron energy was chosen to be 975 keV, as this is the energy of primary importance in the validation experiment, and the initial angle was chosen to be 50◦, admittedly an extreme case. This simulation confirms that some of these electrons can be and are stopped in the Silicon, as is shown by the energy deposited versus depth in Figure 4.3. Here it is shown that 90% of the electron's energy is deposited within the red contour when an electron enters the Silicon at an angle of 50◦, well within a depth of 500 µm, the thickness of the DSSD. (A useful paper for interpreting CASINO plots is [50].)
4.2 Validation
4.2.1 Validation Experiment: 207 Bi Calibration Source
Validation was performed using an experiment whose basic setup was identical to that described for the verification simulations. The source used was 207 Bi, which predominantly emits electrons with energies 0.481, 0.553, 0.565, 0.976, 1.048, 1.059, and 1.682 MeV with probabilities 13.1, 3.8, 1.3, 60.9, 16.0, 4.7, and 0.2%, respectively[46], where the probabilities are normalized to include only these electrons. Other electron energies which 207 Bi also emits were not included due to their small probability of emission. For example, the most frequent electron energy that was not simulated is emitted only once for every 2 × 10^3 emissions of the 976 keV electron. (See [46] for a full characterization of the source.) The DSSD, a type "BB1" purchased from Micron Semiconductor[51], was held up by four aluminum rods which extended from one of the cylinder end caps. The data collected was the relative energy deposited by electrons in the DSSD, which was recorded
Figure 4.3: Evidence that a 976 keV e− can be stopped in 500 µm of Si.
by collecting electron-hole pairs created as the electron passed through the Silicon. A
voltage of 50V was applied across the detector so that charges from all parts of the
detector could be collected. An engineering drawing of the actual setup, without the
surrounding cylinder, is shown in Figure 4.4.
4.2.2 Replication via Simulation
In the simulation many simplifications were made; however, they appear justified. The source energies mentioned in the previous subsection are not all of the electron energies emitted by 207 Bi; however, the other energies are emitted with relatively low probability. Additionally, 207 Bi emits photons, but it was found these do not significantly affect the energy spectra. A full characterization of 207 Bi can be found at [46]. As is apparent from comparing Figures 4.4 and 4.1, numerous approximations were made in creating the simulation geometry. Ultimately only the Aluminum cylinder and the DSSD were included, because it was found that backscattering of electrons off the Aluminum chamber had little impact on simulation results. It was then assumed that the aluminum rods supporting the DSSD would have an even smaller effect.
Figure 4.4: Engineering drawing of the Beta Counting Station[52].
4.2.3 Comparison Between Simulation and Experiment
In order to compare data from the validation experiment and simulations of that experiment, a calibration of the DSSD had to be performed. This was necessary because the energy that is recorded is relative and is binned into "channels". The channel-to-energy calibration was performed by M. del Santo by recording the channels of the recorded energy depositions for several sources of known energy. M. del Santo additionally performed a calibration using Compton scattering. Here photons from a source that mainly emits a photon of a given energy are allowed to pass through the DSSD and are detected afterward by a germanium detector. The change in the photon's angle and energy is sufficient to obtain the energy imparted to the detector, as is shown schematically in Figure 4.5.
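For reference, the standard Compton relation behind this calibration gives the energy imparted to the detector by a photon of energy Eγ scattered through angle θ (with me c² = 511 keV):

```latex
E_{\mathrm{dep}} = E_\gamma - E_\gamma'
                 = E_\gamma - \frac{E_\gamma}{1 + \frac{E_\gamma}{m_e c^2}\,(1 - \cos\theta)}
```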
The channel to energy calibration ultimately resulted in a 4th order polynomial func-
tion that could be applied to the data. Adjustments were also made to the gain applied
to the data and the total number of simulation events was designed such that the total
number of recorded events would match the data. The resulting calibration function
applied to the data in the presented results was
Energy(x = Channel) = G · (p0 + p1 · x + p2 · x^2 + p3 · x^3 + p4 · x^4),   (4.1)
where, to two decimal places, G = 8.6 × 10^2 and p0,1,2,3,4 = 0.0, 4.90 × 10^−3, 2.02 × 10^−4, −6.94 × 10^−7, and 8.15 × 10^−10, respectively. The GEANT4 simulation used for
comparison had 1 × 10^5 source events, but the resulting data bins were multiplied by a
factor of 25 to have the same overall counts. The resulting comparison is shown in Figure
4.6.
Figure 4.5: Formula and schematic for Compton scattering.
Note the general agreement between the GEANT4 simulation and the experimental
data. Relative count peak heights are reproduced very well as are their profiles. It is
apparent that the relative spacing between the two most prominent peaks is not consistent
between simulation and experiment. This suggests that adjustments may need to be made to the calibration function, or that more calibration data are required in the energy range of interest.
4.3 Potential Further Verification and Validation
Ideally more verification and validation checks will be performed to ensure the accuracy
of the code.
Regarding additional verification of the GEANT4 simulations, there are many options.
The first, and perhaps most obvious, would be to upgrade the output so that it includes
information on the volume and coordinates of simulation components. It would also
be beneficial to add statistical errors to the output, as is done in MCNPX, so that
the statistical validity of a simulation will be more apparent. Arguably the most useful
additional verification would be benchmarking for more situations and with other codes.
Ideally, benchmarking would also have been performed for combined positron and proton
emission, as occurs in the β-delayed proton emission that is the focus of this study.
However, it was found that non-physical behavior occurs in MCNPX when positrons are
emitted within a detector of small volume. As an example of this non-physical behavior, note
the double-peaked mean energy deposition for a 1MeV isotropic electron-emitting source
Figure 4.6: Experimental data and GEANT4 simulation comparison for validation.
centrally located within a DSSD, shown in Figure 4.7. These simulations could not
be performed with CASINO, as it only simulates the trajectories of electrons in materials.
For benchmarking this type of physics, the author has recently become aware of the
FLUKA[55],[56] simulation package, which seems well suited due to its versatility and
well-developed user support.
Regarding additional validation, any number of experiments could be performed. The
most useful experiments would use another discrete electron source, an alpha source,
or a proton beam. An alpha source was not simulated in this study. Since the
proton, for which the alpha source serves as a calibration, is generally fully stopped within the DSSD, it was
assumed here that the centroid of its energy distribution would be centered at its
full energy, with a full width at half maximum given by the detector’s resolution. As experimental
information provided this full width at half maximum, it seemed unnecessary to perform
a simulation to confirm this result, since the full width at half maximum of the detector response
is given to the simulation as input. A proton beam was not simulated because it seems
unlikely that the DSSD will be taken to a facility with a proton beam in the near future.
[Plot: 1MeV e⁻ centrally emitted from a 0.5 × 1 × 40mm Si strip; 10⁷ runs with perfect resolution; counts versus pulse height (MeV) in 10keV bins.]
Figure 4.7: Unphysical behavior by MCNPX when simulating an e− in a single Si strip.
Chapter 5
Prediction of Experimental Results
Prior to describing the experiment and simulation that were the focus of this study,
the purpose will be briefly restated. Recall that the goal is to identify levels in 69 Br,
particularly the ground state, that lie below its isobaric analog state (IAS). The proton
emitted from the ground state of 69 Br is of particular interest because it will allow us
to determine a ground state mass for the nucleus, which can be used to experimentally
assign a proton capture Q-value to the rp-process waiting point nucleus 68 Se. The method
which will be used to measure the proton’s energy is the detection of β-delayed proton
emission by 69 Kr, which will inherently sum the energy of the emitted positron with the
energy of the proton, upon detection. The experiment being described to accomplish this
task is scheduled to be performed by Marcelo del Santo, accompanied by the research
group of Hendrik Schatz, from May 10 to May 18 (roughly one week from this writing)
as experiment 07025 at the National Superconducting Cyclotron Laboratory (NSCL).
5.1 69 Kr Experiment
As 69 Kr has a half-life of t1/2 = 32ms[46], it must be produced on site at the NSCL. An ion
source produces 78 Kr, which is accelerated by the coupled K500 and K1200 cyclotrons[57],
schematically shown in Figure 1.3, to an energy of 150 MeV/u (MeV per nucleon). At 25
particle-nanoamps (pnA), the beam will be fragmented by a Beryllium target to produce
69 Kr along with some contaminants. Most of the contaminants will then be removed
as the beam passes through the A1900 fragment separator[30] and the Radio Frequency
Fragment Separator[58] (RF Kicker). Finally the beam will pass through three single
sided PIN Silicon detectors and one DSSD, implanting 69 Kr in a second DSSD which has
behind it a third DSSD and a fourth PIN detector. Collectively these Silicon detectors
are the Beta Counting Station (BCS)[52] . This is shown schematically in Figure 5.1. Not
shown is the Segmented Germanium Array (SeGA)[59] which will surround the BCS.
Figure 5.1: Experimental setup for experiment 07025 at the NSCL.
The data of primary interest in the experiment is the energy deposited in the BCS
and SeGA. The BCS will collect energy from positrons and protons resultant from 69 Kr
β-delayed proton emission and SeGA will collect energy from photons emitted in the
de-excitation of 69 Br. As each system will also detect radiation from background and
contaminants, gates have been devised so that events having decay signals in the BCS
and SeGA in coincidence can be isolated so as to effectively remove the majority of
contaminants and background for decay branches that pass through an excited state
of 69 Br. For the ground state decay branch one would not expect a photon. Working
from the assumption that said gating will be effective, the simulation in this study only
examines the effects of radiation from β-delayed proton emission in the BCS.
5.2 Simulation and Results
As was the case for the validation experiment, simplifications were made to approximate
the system of interest. Indeed the setup simulated here is identical to that described in
section 4.2.2 and shown in Figure 4.1, with the only difference being the source and its
location. Here the source is located within the DSSD and emits a positron and then a
proton, so as to replicate β-delayed proton emission of 69 Kr. A time delay between the
positron and proton emission is not included, as this time in reality will be undetectable
using the given experimental setup.
Any number of decay branches could have been simulated; however, here we only
examined the decay through the IAS and the lower limit for the ground state of 69 Br.
The IAS branch was chosen because it has been previously observed[5] . The ground
state decay branch was simulated because extracting its energy is the main goal of the
experiment and the lower limit was chosen because it provides the lowest energy signature,
closest to detector threshold, we expect to detect.
The simulation goes through the IAS branch 83% of the time, as this was the branch-
ing ratio assigned by [5]. Upon selecting the decay branch, the rejection method[53] is
used to select the energy of the emitted positron according to its β-spectrum, which is
defined by the decay’s Q-value. Recall that the positrons can have different energies for a
decay of a given Q-value because β-decay is a three-body reaction and thus the positron
and electron neutrino may divide the total available kinetic energy differently each time
(see Figure 5.2). The β-spectrum was calculated using the simple relation[4]
dPE/dE = ξ ∗ (dP/dp) ∗ (dp/dE)
       = ξ ∗ p² ∗ (Q − E)² ∗ (dp/dE)
       = ξ ∗ p ∗ (Q − E)² ∗ (E + me),
where PE is the probability of emitting a positron of a given kinetic energy, E is an
energy selected randomly from E = 0 to E = Q, Q is the Q-value of the decay, p is
the momentum, me is the mass of a positron, and ξ is a normalization constant[4] that
contains cancelling units and brings the maximum probability to 1. If PE is greater
than a random number selected uniformly from 0 to a bound at or above Pmaximum,
then the positron is fired with energy E; otherwise a new E is drawn. A Coulomb
correction factor[54] could have been added; however, this would require integrating the
probability distribution for each event and,
upon comparison, the resulting distribution was negligibly different for the purposes of this study.
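The rejection scheme described above can be sketched as follows. The spectrum shape matches the relation given in the text; the coarse scan used to bound the distribution, and all names, are assumptions of this sketch rather than the thesis code.

```python
import math
import random

M_E = 0.511  # positron rest energy in MeV

def spectrum(e, q):
    """Unnormalized beta spectrum: p * (Q - E)^2 * (E + me)."""
    p = math.sqrt((e + M_E) ** 2 - M_E ** 2)  # momentum from total energy
    return p * (q - e) ** 2 * (e + M_E)

def sample_positron_energy(q, rng=random):
    # Bound the spectrum with a coarse scan; any bound >= the true maximum works
    bound = 1.05 * max(spectrum(q * i / 1000.0, q) for i in range(1001))
    while True:
        e = rng.uniform(0.0, q)                   # candidate kinetic energy in [0, Q]
        if rng.uniform(0.0, bound) <= spectrum(e, q):
            return e                              # accept: fire positron with energy e
```

Accepted energies are distributed according to the β-spectrum, with the acceptance rate set by how tightly the bound hugs the distribution.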
Following positron emission, a proton is emitted with an energy in accordance with the
decay branch chosen. Thus, roughly 83% of the time a positron will be emitted with
an energy defined by a 10.069MeV Q-value, followed by a 4.07MeV proton, and the rest
of the time a positron will be emitted with an energy defined by a 14.019MeV Q-value,
followed by a 0.50MeV proton.
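The branch selection can be sketched as follows; the 83%/17% split, Q-values, and proton energies are those quoted above, while the structure and names are illustrative assumptions.

```python
import random

# Branch parameters quoted in the text: (branching fraction, Q-value MeV, proton MeV)
IAS_BRANCH = (0.83, 10.069, 4.07)
GS_BRANCH = (0.17, 14.019, 0.50)

def choose_branch(rng=random):
    """Select a decay branch: IAS roughly 83% of the time, ground state otherwise."""
    return IAS_BRANCH if rng.random() < IAS_BRANCH[0] else GS_BRANCH
```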
The simulation result that was primarily investigated was the total energy deposited
in the DSSD in a decay event compared to the energy deposited only by the event’s
proton. More specifically, the comparison made was the difference between the centroids
of the peaks for these energy deposition distributions, as this is the information used to
correct for the summing effect. For the results shown (see Figures 5.3 and 5.4), the energy
deposited was segmented into 0.035MeV bins, and a detector resolution of 0.175MeV,
experimentally determined by M. del Santo, was applied at all energies.
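A minimal sketch of how such a resolution might be applied, assuming a Gaussian detector response: the FWHM and bin width are the values quoted above, the FWHM-to-sigma conversion is the standard Gaussian relation, and the function names are illustrative.

```python
import math
import random

FWHM = 0.175  # MeV, detector resolution quoted in the text
BIN = 0.035   # MeV, histogram bin width quoted in the text
# Convert FWHM to Gaussian standard deviation: FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.355*sigma
SIGMA = FWHM / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def smear(energy, rng=random):
    """Apply Gaussian detector resolution to a deposited energy (MeV)."""
    return rng.gauss(energy, SIGMA)

def to_bin(energy):
    """Histogram bin index for a (smeared) energy."""
    return int(energy // BIN)
```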
One can see that the shift in the centroid of the energy deposition distribution due
to β + summing is 0.215MeV for both decay branches. This result indicates that this
shift will apply to all detected protons. Consequently, it seems unnecessary to simulate
additional decay branches until the experiment is performed. Once peaks in energy
deposition from β-delayed proton emission are identified in data analysis, subtracting
Figure 5.2: Characteristics of a 14MeV Q-value β+-decay.
0.215MeV from the peak’s mean value should yield a more accurate proton energy and, in
the case of the ground state decay branch, a more accurate ground state mass of 69 Br that
can be used to help determine 68 Se’s proton capture Q-value. Note the asymmetric shape
of the summed energy peak. This asymmetry will allow for the disentanglement of the
summing effect from detector resolution, which spreads the energy peak symmetrically,
in the analysis of the experimental data.
5.3 Future Use of Simulations for Data Analysis
More simulations will be performed to ensure a swift and accurate data analysis. Regard-
ing the 69 Kr simulations, it will be important to simulate various combinations of decay
branches with different energies and ratios until the experimental results are reproduced.
Of primary importance is the reproduction of the IAS decay branch, as the energy of
these protons is known. Additionally, simulations of 22 Si and 23 Si could be performed, as
these β-delayed proton emitters have recently been measured at the NSCL. Regarding
all simulations, a significant improvement would be to make the simulation setup more
realistic with the addition of the surrounding DSSDs. It will then be possible to find
correlations between the implantation location in the central DSSD and the location of
positron detection in the surrounding DSSDs. Should the results show that some group of
strips in a surrounding DSSD detects positrons more often than others, this information
can be used to gate on the more relevant strips and consequently speed the analysis
Figure 5.3: Summing effect for IAS and proposed ground state decay branch for 69 Kr
β-delayed proton emission (blue=proton, red=sum).
process. Ultimately many more simulations may be performed, however this depends on
the needs of the team involved in data analysis.
(a) Ground state decay branch (b) IAS decay branch
Figure 5.4: Closeup of the summing effect for simulated decay branches.
Bibliography
[1] http://www.phy.ornl.gov/hribf/science/abc/
[2] Van Wormer, L. et al. Astrophysical Journal. 432 (1994), 326
[3] Gottfried, K. & Yan, T. Quantum Mechanics: Fundamentals. New York: Springer-
Verlag, 2003
[4] Martin, B. Introduction to Nuclear Physics. West Sussex, United Kingdom: Wiley &
Sons, 2009
[5] Xu, X.J. et al. Physical Review C. 55 (1997), 2, R553
[6] Lima, G.F. et al. Physical Review C. 65 (2002), 044618
[7] Schatz, H. et al. Private Communication
[8] Kramer, K. Introductory Nuclear Physics. Hoboken, New Jersey: Wiley & Sons, 1988
[9] Schatz, H. et al. Physics Reports. 294 (1998), 167
[10] Tuli, J. NNDC 2007 Nuclear Wallet Cards http://www.nndc.bnl.gov/wallet
[11] Smith, K. Private Communication
[12] Stolz, A. et al. Nuclear Instruments & Methods B. 241 (2005), 1, 858
[13] Nunes, F. & Thompson, I. Nuclear Reactions for Astrophysics. New York: Cam-
bridge University Press, 2009
[14] del Santo, M. Private Communication
[15] Hester, J. & Scowen, P. (Arizona State University) & NASA.
http://hubblesite.org/newscenter/archive/releases/1996/22/
[16] Suess, H. & Urey, H. Reviews of Modern Physics 28 (1956), 1, 53
[17] Burbidge, E. et al. Reviews of Modern Physics 29 (1957), 4, 547
[18] Cameron, A. Publications of the Astronomical Society of the Pacific 69 (1957), 201
[19] Schatz, H. & Rehm, K. Nuclear Physics A 777 (2006), 601
[20] Grindlay, J. Comments on Astrophysics 6 (1976), 165
[21] Evans, W. et al. Astrophysical Journal 206 (1976), L135
[22] Hansen, C. & van Horn, H. Astrophysical Journal 195 (1975), 735
[23] Woosley, S. & Taam, R. Nature 263 (1976), 101
[24] Joss, P. & Rappaport, S. Nature 265 (1977), 222
[25] http://essayweb.net/astronomy/blackhole.shtml
[26] Iliadis, C. Nuclear Physics of Stars Berlin: Wiley-VCH, 2007.
[27] Weiss, M. & NASA Chandra X-ray Space Telescope.
http://chandra.harvard.edu/photo/2001v1494aql/index.html
[28] Maurer, I. & Watts, A. Monthly Notices of the Royal Astronomical Society 383
(2008), 387
[29] Galloway, D. et al. Astrophysical Journal Supplement Series 179 (2008), 360
[30] Steiner, A. et al. Physics Reports 411 (2005), 6, 325
[31] Cyburt, R. et al. Currently under review by Astrophysical Journal Supplements
Series
[32] Meisel, Z. et al. Proceedings of the 10th Symposium on Nuclei in the Cosmos (2008),
173
[33] Van Wormer, L. et al. Astrophysical Journal 432 (1994), 326
[34] Schatz, H. et al. Proceedings of the American Chemical Society symposium: Origins
of Elements in the Solar System: Implications of Post 1957 Observations (2000), 153
[35] Smith, K. et al. Proceedings of the 10th Symposium on Nuclei in the Cosmos (2008),
178
[36] https://mcnpx.lanl.gov/
[37] http://www.geant4.org/geant4/
[38] http://www.gel.usherbrooke.ca/casino/index.html
[39] http://www-rsicc.ornl.gov/
[40] http://www.mcnpvised.com/
[41] Agostinelli, S. et al. Nuclear Instruments and Methods in Physics Research A 506
(2003), 3, 250
[42] Allison, J. et al. IEEE Transactions on Nuclear Science 53 (2006), 1, 270
[43] http://geant4.web.cern.ch/geant4/support/index.shtml
[44] http://www.lcsim.org/software/geant4/doxygen/html/index.html
[45] http://www.vias.org/pngguide/chapter06_08.html
[46] http://www.nndc.bnl.gov/
[47] http://root.cern.ch/drupal/
[48] Drouin, D. Microscopy and Microanalysis 12 (2006), S02, 1512
[49] Post, D. & Votta, L. Physics Today Jan. 2005, 35
[50] Drouin, D. et al. Scanning 29 (2007), 92
[51] http://www.micronsemiconductor.co.uk/pdf/bb.pdf
[52] Prisciandaro, J. et al. Nuclear Instruments & Methods A 505 (2003), 1, 140
[53] Press, W. et al. Numerical Recipes in C, 2nd Ed. New York: Cambridge University
Press, 1992. (p.290)
[54] Blatt, J. & Weisskopf V. Theoretical Nuclear Physics. New York: Springer-Verlag,
1979.
[55] http://www.fluka.org/fluka.php
[56] Fassò, A. et al. CERN-2005-10 (2005), INFN/TC 05/11, SLAC-R-773
[57] http://www.nscl.msu.edu/tech/accelerators
[58] Bazin, D. et al. Nuclear Instruments & Methods A 606 (2009), 3, 314
[59] http://www.nscl.msu.edu/files/sega_sld_2007.pdf
This thesis was prepared using the LaTeX typesetting language [60, 61].
[60] Lamport, L. LaTeX: A Document Preparation System. Boston: Addison-Wesley, 1985
[61] Knuth, D. E. The TeXbook. Boston: Addison-Wesley, 1985
Appendix: Code
The following code is for the final simulations performed in this study. The MCNPX code
simulates the validation experiment described in Chapter 4. The file path for this work
is: /projects/jina/jinalib/meisel/Thesis69Br/MCNP/Comp2Geant/. The GEANT4 code
simulates the experiment described in Chapter 5. The file path for this work is:
/projects/jina/jinalib/meisel/Thesis69Br/GEANT/SimpleSetup/. Code for all simulations is
available, and one can contact the author for access.
5.4 MCNPX
207Bi towards 40 strip DSSD
c cells
c
c Exterior void
1 0 2
c
c DSSD strips
2 1 -2.329 -1 -4
3 1 -2.329 -1 4 -5
4 1 -2.329 -1 5 -6
5 1 -2.329 -1 6 -7
6 1 -2.329 -1 7 -8
7 1 -2.329 -1 8 -9
8 1 -2.329 -1 9 -10
9 1 -2.329 -1 10 -11
10 1 -2.329 -1 11 -12
11 1 -2.329 -1 12 -13
12 1 -2.329 -1 13 -14
13 1 -2.329 -1 14 -15
14 1 -2.329 -1 15 -16
15 1 -2.329 -1 16 -17
16 1 -2.329 -1 17 -18
17 1 -2.329 -1 18 -19
18 1 -2.329 -1 19 -20
19 1 -2.329 -1 20 -21
20 1 -2.329 -1 21 -22
21 1 -2.329 -1 22 -23
22 1 -2.329 -1 23 -24
23 1 -2.329 -1 24 -25
24 1 -2.329 -1 25 -26
25 1 -2.329 -1 26 -27