Faculty of Sciences
Department of Physics and Astronomy
Polychromatic Monte Carlo dust
radiative transfer
Tine Geldof
Promoter: Prof. Dr. Maarten Baes
Copromoter: Peter Camps
A Thesis submitted for the degree of Master in Physics and Astronomy
Year 2013 - 2014
Contents
1 Introduction
1.1 Interstellar dust in galaxies
1.2 The SKIRT program
1.2.1 Dust radiative transfer in SKIRT
1.2.2 Optimization techniques
1.3 Polychromatism
1.3.1 Polychromatism in SUNRISE
1.3.2 Polychromatism in SKIRT
1.3.3 Problem and solution
1.4 Overview of the symbols
2 Implementation of the polychromatic photon packages
2.1 The different stages
2.2 Stellar emission of the photon packages
2.3 Propagation through the dust
2.4 Thermal dust emission
2.5 The dust self-absorption
2.6 Overview
3 Findings
3.1 Stellar model without dust: bi-Plummer model
3.2 Stellar model with dust component
3.2.1 Launching of the photon packages
3.2.2 Absorption of the photon packages
3.2.3 Scattering of the photon packages
3.3 Optically thin and thick models
3.4 Splitting of the photon packages
3.4.1 The concept of photon splitting
3.4.2 Determination of the optimal biasing limit
3.4.3 Accuracy of the results
3.5 Effects of dust self-absorption and emission
3.5.1 Thermal dust emission
3.5.2 Reaching internal equilibrium
3.6 RGB images
3.7 Overview
4 Discussion and future prospects
5 Discussie (discussion in Dutch)
FACULTY OF SCIENCES
Department of Solid State Sciences
Krijgslaan 281 S1, B-9000 Gent www.UGent.be

Dear Sir or Madam,

I hereby grant permission to Ms. Tine Geldof, Master's student in Physics & Astronomy at Ghent University, to write her master's thesis in English, given the international character of the thesis work.

I hope to have been of service to you.

Yours sincerely,

Prof. Dr. Dirk Poelman
Chair of the examination board for Physics & Astronomy

Contact: Dirk Poelman, dirk.poelman@UGent.be, T +32 9 264 43 67, F +32 9 264 49 96
Date: 22-04-2014
Acknowledgement
This master's thesis is the final result of my education at Ghent University toward the degree of Master in Physics and Astronomy. Working on a thesis project requires a great deal of time, energy and perseverance, but is above all intellectually stimulating and challenging. I would therefore like to thank my promoter Prof. Dr. Maarten Baes, who gave me the opportunity and the courage to work on this project, for his guidance throughout the past year and in particular for his ongoing enthusiastic supervision.
I would like to thank PhD student Peter Camps as well, who helped me at every stage of my master's thesis, in particular through his insights into the workings of the SKIRT program and the Qt Creator development environment. It was truly a pleasure to work under his guidance.
I would also like to thank my fellow student Sam Verstocken, for his insights and help
with whatever small problem I had, and my family and other colleagues for coping
with my moments of stress during the past year.
Tine Geldof
Summary
The study of dusty astrophysical objects, such as spiral and elliptical galaxies, galaxy clusters and even galaxy clouds, is an important branch of present-day astrophysical research. SKIRT, an acronym for Stellar Kinematics Including Radiative Transfer, is an advanced 3D continuum radiative transfer code based on the Monte Carlo algorithm, developed by the UGent astronomy department to simulate and study such objects in detail. A simulation consists of consecutively following the individual path of each photon package through the dusty medium. SKIRT currently uses monochromatic photon packages and describes their life cycle during their propagation.
The purpose of this project is to introduce an optimization technique that uses polychromatic photon packages in SKIRT. The main goal is to implement this method and investigate the advantages and disadvantages it brings, one of them being reduced noise in the color images of the simulated systems. With this technique, we hope to obtain accurate results and improve the SKIRT code. On the other hand, problems will inevitably be encountered and discussed in this report, due to the wavelength dependence of several astrophysical parameters used in the calculations.
The stepwise implementation will be tested at each stage on simple astrophysical models by comparing the codes containing the monochromatic and polychromatic photon packages to one another. First the stellar model will be tested without a dusty medium. Next, dust will be inserted to verify the scattering and extinction of the photon packages. A third step is to include dust emission and dust self-absorption (the absorption of photon packages that are emitted by the dust itself) in the simulation. The improved program will then be tested on several models with the aim of investigating the accuracy of the results.
The source code of the adjustments in SKIRT, written in C++, will not be included in this
report. More information about the program, the SKIRT documentation and the down-
loading and installation guide, can be found at the SKIRT website: www.skirt.ugent.be.
1 Introduction
1.1 Interstellar dust in galaxies
In order to study the structure as well as the kinematics of galaxies, we have to obtain their intrinsic three-dimensional distributions by deprojecting observed two-dimensional images. This deprojection is no easy task, and it is complicated by the effects of interstellar dust. This constituent of the interstellar medium consists of a variety of macroscopic solid particles, mainly carbonaceous and silicate grains. Although dust makes up only a small fraction (about 1% or less) of the total amount of matter within galaxies, it is nevertheless a very important constituent, as it affects the starlight on its way through the galaxy. Hence, interstellar dust affects the projections of the light distributions, giving us a distorted view of the investigated galaxies. It makes many regions of a galaxy opaque, as the stellar photons interact with the different types of dust grains through the physical processes of absorption and scattering. The efficiency of these two processes, the combination of which is called extinction, depends strongly on the wavelength of the transmitted light and on the properties of the dust grains, e.g. their sizes and compositions. A portion of the stellar radiation, which covers the UV and optical wavelengths, is converted to IR and sub-millimeter radiation. In some special galaxies, it seems that as much as about 99% of the stellar light is converted to these redder wavelengths [Baes, 2012].
Just as the light profiles of galaxies are severely affected by dust, so are the projected kinematics. The stellar kinematics of a galaxy refers to the fact that the stars within the system move on particular orbits; for each star, it is possible to determine its three-dimensional position and velocity. Photons originating from, for example, high-velocity stars at the center of opaque regions cannot reach the observer, and hence do not contribute to the line-of-sight velocity distribution (LOSVD) of the galaxy's spectrum. The kinematics of the galaxy will in this case be biased towards lower line-of-sight velocities [Baes, 2001].
The dust grains within the interstellar medium are found in a broad range of differ-
ent types, with varying sizes, compositions, etc. In performing simulations of real-
istic models, it is necessary to account for these different dust mixtures. This can be
done by defining a number of dust types, consisting of different chemical composi-
tions, densities, sizes and shapes. Each dust type will consequently be characterized
by a specific absorption coefficient, scattering coefficient and scattering phase function
[Steinacker et al., 2013]. These dust properties will be explained later on in this thesis.
Accounting for different dust mixtures however complicates the deprojection calcula-
tions even more and will hence not be considered in this project.
A spiral galaxy viewed from its side, at an inclination of nearly 90 degrees, is referred
to as an edge-on spiral galaxy. A special feature of these galaxies is that they seem to
be optically thick in their central regions and optically thin in the outer regions. The
reason for this is that a spiral galaxy contains interstellar dust that predominantly lies
within a thin disk narrower than the stellar disk [Baes, 2012].
Edge-on spiral systems can easily be used to investigate the effects of dust on their radiation field, as the dust extinction in the plane of these galaxies adds up along the line of sight. This results in prominent dust lanes, visible as thin, darkened bands running through the galaxy's center in the UV or optical window. This feature
is clearly displayed in figure 1.1, where the bulge fraction and the amount of dust is
varied from the top left to the bottom right. When increasing the dust fraction in the
disk of the galaxy, the dust lane becomes clearer. Due to the typical thermal emission of
the dust grains, these dusty bands can also be seen at infrared or sub-millimeter wave-
lengths when making use of the appropriate telescopes or simulations which include
dust emission.
Figure 1.1: Models of an almost edge-on spiral galaxy with an inclination of 88°, in which the effects of extinction are examined. From left to right the amount of dust in the disk increases, while from top to bottom the bulge fraction grows. [Baes, 2012]
A sophisticated deprojection technique that takes dust absorption, emission and scattering into account makes use of the so-called radiative transfer equation (RTE), which describes the interaction between matter and radiation in a macroscopic way. Solving this equation is called the RT problem, as it governs the physical process of deprojection. Since an observed image on the plane of the sky gives only a two-dimensional projection of a 3D structure, the inverse RT problem must be solved to obtain the 3D distribution of the examined stellar system. This can be done by a
couple of different numerical methods that make use of the RTE: an iteration method,
a method based on the expansion in spherical harmonics, a discretization method and
a Monte Carlo technique [Baes and Dejonghe, 2002b]. In this project, the Monte Carlo
method is of most importance, as it can be used to investigate more complex geome-
tries. Moreover, the program that is edited and used in this thesis in particular utilizes
the Monte Carlo technique to execute the different simulations.
1.2 The SKIRT program
SKIRT, acronym for Stellar Kinematics Including Radiative Transfer, is an advanced 3D
continuum radiative transfer code based on the Monte Carlo algorithm. The first SKIRT
version, written in Fortran 77 and developed by the UGent astronomy department,
was developed in 2001 to study the effect of dust absorption and scattering on the ob-
served kinematics of early-type galaxies [Baes and Dejonghe, 2002a] [Baes et al., 2003].
Later on, new versions of SKIRT were developed in C++, focusing on dust absorption, scattering and thermal re-emission [Popescu and Tuffs, 2005] [Baes et al., 2005]. The latest version, SKIRT6, which will be used throughout this report, contains the most recent functionality and can easily be edited and extended with new features in the Qt Creator development environment [Baes et al., 2011]. Furthermore, the code is parallelized using Qt threads; this parallelization significantly speeds up the complicated calculations performed in the code.
In SKIRT6, the simulation starts from a 3D model for the stellar objects and dusty systems. The different parameter values describing the model are stored in a ski file, i.e. a file with the ".ski" extension. Using the Monte Carlo technique, it can calculate the
intrinsic properties, e.g. the strength of the radiation field, dust temperature distri-
bution, etc., and the observable properties of the models, e.g. images of the galaxies,
SEDs (Spectral Energy Distributions), etc. These calculated results are all outputted via
ASCII files and FITS (Flexible Image Transport System) images.
1.2.1 Dust radiative transfer in SKIRT
The key principle in Monte Carlo radiative transfer simulations is that the radiation
field is treated as a flow of a finite number of luminosity packages, with the entire lu-
minosity divided among these packages. The simulation itself essentially consists of
following the individual path or life cycle of each single photon package through the
dusty medium [Steinacker et al., 2013].
We can consider one of the many photon packages emitted by a stellar object, consisting of a (large) number of photons with the same wavelength. Note that, from now on, a single photon package will sometimes be referred to as a photon, although in principle the two are not the same. In the simplest case, this package can be characterized by a luminosity Lλ, a position x of the last emission or interaction and a propagation direction k. Space is divided into a number of cells, each with a uniform dust density. The package travels along a straight line until it interacts with a dust grain (it can be scattered or absorbed) or leaves the dusty medium. The first Monte Carlo step is to initialize these three properties.
Initialization
The initial luminosity is defined through the total luminosity of the stellar object and the number of photon packages N_{pp} in the simulation:

L_\lambda = \frac{L_\lambda^{tot}}{N_{pp}} \quad (1.1)
This gives every photon package the same luminosity. The initialization of the posi-
tion x and direction of emission k is done randomly. A random position is generated
from the 3D stellar distribution and the propagation direction is defined by choosing a
random position on the unit sphere, as stars typically emit radiation isotropically.
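The initialization step above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names, not SKIRT's actual C++ implementation:

```python
import math
import random

def init_photon_luminosity(total_luminosity, n_packages):
    """Give each package an equal share of the total luminosity (eq. 1.1)."""
    return total_luminosity / n_packages

def random_isotropic_direction(rng=random):
    """Draw a uniformly random direction on the unit sphere.

    cos(theta) is uniform on [-1, 1] and phi uniform on [0, 2*pi), which
    together sample the sphere isotropically, as stars emit isotropically.
    """
    cos_theta = rng.uniform(-1.0, 1.0)
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)
```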
Determination of the interaction point
The second Monte Carlo step is to determine whether the photon package will interact
with a dust cell or will leave the system. To do this, we need the specific intensity
Iλ, a conserved quantity that is defined as the intensity per unit of wavelength. The
specific intensity represents the amount of energy that is carried by radiation for a
given wavelength per unit of solid angle and per unit of time, crossing a unit area
perpendicular to the propagation direction. It is thus defined by:
dE = I_\lambda \, dA_\perp \, d\Omega \, d\lambda \, dt
The radiative transfer equation for one photon package is given by:

\frac{dI_\lambda}{ds} = -\kappa_\lambda \rho I_\lambda \quad (1.2)
where \kappa_\lambda is the extinction coefficient of the dust at the given wavelength, \rho is the dust density and s is the path length covered along direction k from the starting point x. The extinction coefficient \kappa_\lambda corresponds to the opacity of the dust, which can be interpreted as the impenetrability of the medium; it has contributions from the absorption and scattering processes, hence \kappa_\lambda = \kappa_\lambda^{sca} + \kappa_\lambda^{abs}. The radiative transfer equation (1.2) can be solved as:

I_\lambda(s) = I_\lambda(0) \exp(-\tau_\lambda) \quad (1.3)
where we have introduced the optical depth along this particular path as:

\tau_\lambda(s) = \int_0^s \kappa_\lambda \rho(s') \, ds' \quad (1.4)
A random interaction point can now be determined by generating a random optical depth \tau_\lambda from the probability distribution function (PDF), which is given by an exponential distribution:
p(\tau_\lambda) \, d\tau_\lambda = e^{-\tau_\lambda} \, d\tau_\lambda \quad (1.5)
Comparing this random optical depth \tau_\lambda with the total optical depth along the path, \tau_{\lambda,path}, given by equation 1.4 with s extending to the edge of the system, determines whether the photon package interacts. If \tau_\lambda > \tau_{\lambda,path}, there is no interaction and the photon package leaves the system. If \tau_\lambda < \tau_{\lambda,path}, an interaction takes place at position x_{int} = x + s k, where the physical path length s is determined by inverting equation 1.4.
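The two steps above (sampling a random optical depth from the exponential distribution and inverting equation 1.4 through the piecewise-constant dust cells) can be sketched as follows. This is an illustrative Python fragment with hypothetical names, not the actual SKIRT C++ code:

```python
import math
import random

def sample_interaction(cell_lengths, cell_kappa_rho, rng=random):
    """Decide whether a package interacts, and where along its path.

    The path crosses cells with geometric lengths ``cell_lengths`` and
    products kappa*rho in ``cell_kappa_rho`` (piecewise-constant density).
    Returns the physical path length s to the interaction point, or None
    when the random optical depth exceeds the total along the path and
    the package leaves the system.
    """
    tau = -math.log(1.0 - rng.random())      # sample p(tau) = exp(-tau)
    tau_path = sum(l * k for l, k in zip(cell_lengths, cell_kappa_rho))
    if tau >= tau_path:
        return None                          # package escapes the system
    # Invert eq. 1.4: walk through the cells until the sampled optical
    # depth is used up, then interpolate within the final cell.
    s, tau_acc = 0.0, 0.0
    for length, kr in zip(cell_lengths, cell_kappa_rho):
        dtau = length * kr
        if tau_acc + dtau >= tau:
            return s + (tau - tau_acc) / kr
        s += length
        tau_acc += dtau
```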
Nature of the interaction
Deciding the nature of the interaction, i.e. whether a scattering or an absorption process takes place, is done by defining the fraction a_\lambda given by:

a_\lambda = \frac{\kappa_\lambda^{sca}}{\kappa_\lambda^{sca} + \kappa_\lambda^{abs}}, \quad (1.6)

where \kappa_\lambda^{sca} is the scattering coefficient of the dust and \kappa_\lambda^{abs} the absorption coefficient. The fraction a_\lambda is called the dust grain albedo. The nature of the interaction is in this
case not chosen randomly. Instead, the photon package will be split into two parts. One
part with weight 1 − aλ is absorbed such that a luminosity (1 − aλ)Lλ is stored in the
absorbed luminosity counter of the particular cell where the interaction happens. The
remaining part with weight aλ is scattered and the Monte Carlo loop for the photon
package will continue with a reduced luminosity of aλLλ. A new random direction
will be generated and the propagation of the luminosity package continues on a new
path. When a photon package holds only 0.01% or less of its original luminosity, the
package vanishes and its life cycle is ended [Baes et al., 2011].
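The absorption-scattering split and the 0.01% termination rule can be sketched as follows (hypothetical Python names, for illustration only; the cell's absorption counter is modeled here as a simple list):

```python
def absorb_scatter_split(luminosity, albedo, original_luminosity, cell_absorption):
    """Split a package at an interaction instead of choosing randomly.

    A fraction (1 - albedo) of the luminosity is booked in the cell's
    absorbed-luminosity counter; the rest continues as a scattered
    package. Returns the surviving luminosity, or 0.0 when the package
    has dropped to 0.01% or less of its original luminosity and its
    life cycle is ended.
    """
    cell_absorption.append((1.0 - albedo) * luminosity)
    remaining = albedo * luminosity
    if remaining <= 1e-4 * original_luminosity:
        return 0.0      # package vanishes
    return remaining
```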
1.2.2 Optimization techniques
While the simple Monte Carlo radiative transfer method works well for 1D simulations, it is particularly inefficient for general 3D geometries of the stars and dust. Several optimization techniques have been developed to eliminate these inefficiencies: continuous absorption, forced scattering, the peel-off technique and polychromatic photon packages [Steinacker et al., 2013]. They are all summarized in figure 1.2 and will be further explained in this subsection. Notice that the absorption-scattering split method of the photon packages, explained in subsection 1.2.1 where the nature of the interaction was
Figure 1.2: Different optimization techniques in the Monte Carlo Radiative Transfer method.
The red cells show the continuous absorption of the photons. The pink arrows represent the
forced scattering technique. The light blue and orange arrows represent the peel-off techniques.
[Steinacker et al., 2013]
treated, is an optimization technique as well, as it tries to maximize the functionality
of one photon package.
Continuous absorption
The Monte Carlo radiative transfer method can be enhanced by absorbing along the entire path of the photon package, instead of only at the interaction site. In this manner, the simulations produce less noise for the same number of packages. With continuous absorption, the photon is split into N + 2 different parts, with N the number of dust cells along the photon's path:

W_{esc} = e^{-\tau_{path}} , \quad (1.7)

W_{sca} = a \left(1 - e^{-\tau_{path}}\right) , \quad (1.8)

and

W_{abs,n} = (1 - a) \left(e^{-\tau_{n-1}} - e^{-\tau_n}\right) \quad (1.9)
One part Wesc will leave the system, one part Wsca is scattered at the interaction location
determined in subsection 1.2.1, and N parts W_{abs,n} are absorbed in the nth cell. In the above equations, \tau_n gives the optical depth measured from the photon's location to the surface of the nth cell. The parameter a is the dust grain albedo, i.e. the fraction of the photon that is scattered during an interaction process. Note that the part that will
be scattered is the actual part of the photon package that survives and continues in the
life cycle.
The strength of the continuous absorption optimization method lies in the fact that
all photon packages contribute to the calculation of the absorption rate of each cell
they pass through. This is particularly useful for systems which are optically thin, as
otherwise they would have very few absorptions in the simple MC approach.
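Equations 1.7-1.9 can be sketched as follows. This illustrative Python fragment (hypothetical names, not SKIRT code) computes the escaping, scattered and per-cell absorbed weights, which by construction sum to unity:

```python
import math

def continuous_absorption_weights(albedo, cell_taus):
    """Split a package into escaping, scattered, and absorbed parts.

    ``cell_taus`` holds the cumulative optical depth tau_n from the
    package's position to the surface of each of the N cells along its
    path, so cell_taus[-1] equals tau_path. Implements eqs. 1.7-1.9.
    """
    tau_path = cell_taus[-1]
    w_escape = math.exp(-tau_path)
    w_scatter = albedo * (1.0 - math.exp(-tau_path))
    w_absorbed = []
    tau_prev = 0.0
    for tau_n in cell_taus:
        w_absorbed.append((1.0 - albedo) * (math.exp(-tau_prev) - math.exp(-tau_n)))
        tau_prev = tau_n
    return w_escape, w_scatter, w_absorbed
```

The telescoping sum of the absorbed parts makes the three kinds of weights add up to the full package, which is a convenient sanity check.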
Forced scattering
In the SKIRT program a concept of forced scattering is used such that every ray is
forced to contribute to the scattered flux. Otherwise, in a system which would be
optically thin, most photon packages would leave the system without any interaction
and would hence be wasted. In the forced scattering method, the exponential distribution p(\tau_\lambda) given by eq. 1.5 is replaced by the adjusted distribution q(\tau_\lambda), and the package is assigned a compensating weight W_{fs} = p(\tau_\lambda)/q(\tau_\lambda) = 1 - e^{-\tau_{\lambda,path}}:

q(\tau_\lambda) \, d\tau_\lambda =
\begin{cases}
\dfrac{e^{-\tau_\lambda} \, d\tau_\lambda}{1 - e^{-\tau_{\lambda,path}}} & (\tau_\lambda < \tau_{\lambda,path}) \\
0 & (\tau_\lambda > \tau_{\lambda,path})
\end{cases}
\quad (1.10)
In this equation, the cutoff implies that when a random optical depth \tau_\lambda larger than \tau_{\lambda,path} is generated, the adjusted distribution q(\tau_\lambda) \, d\tau_\lambda = 0. In other words, when an interaction point outside the system is generated, the probability of scattering at that location is zero. This method forces the simulation to generate a random optical depth \tau_\lambda smaller than \tau_{\lambda,path}, and hence an interaction site that lies inside the system.
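Sampling from the truncated distribution q(\tau_\lambda) of eq. 1.10 amounts to inverting its cumulative distribution. A minimal Python sketch (hypothetical names) could look like this:

```python
import math
import random

def sample_forced_tau(tau_path, rng=random):
    """Sample an optical depth from the truncated distribution q(tau).

    Inverting the cumulative distribution of eq. 1.10 guarantees
    tau < tau_path, so every package is forced to interact inside the
    system. The package carries the compensating weight
    W_fs = 1 - exp(-tau_path).
    """
    xi = rng.random()
    tau = -math.log(1.0 - xi * (1.0 - math.exp(-tau_path)))
    weight = 1.0 - math.exp(-tau_path)
    return tau, weight
```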
Peel-off technique
Simple MCRT is completely inefficient in building up observable images/SEDs, as
only the photons that are emitted from the system in the direction of the observer
contribute to the output appearance. This flaw can be eliminated if we require that all
photons directly contribute to the output images, by creating peel-off photon packages
after every emission or scattering event. The peel-off photon packages contain a portion of the luminosity of the original photons, which is sent in the direction of the detection instruments. This luminosity fraction is estimated by defining the weight factor of a photon in the direction of the observer:
W_{ppp} = p(n_{obs}) \, e^{-\tau_{obs}} \quad (1.11)
where \tau_{obs} is the optical depth from the position of the emission or scattering event to the observer and p(n_{obs}) is the probability that the photon is directed toward the observer. By gathering all the information carried by the peel-off photons, the output of a real CCD camera can be mimicked, and images and SEDs of the models can be created.
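The peel-off weight of eq. 1.11 translates directly into code; the following is a hedged Python sketch with hypothetical names:

```python
import math

def peel_off_luminosity(luminosity, p_obs, tau_obs):
    """Luminosity sent toward the observer by a peel-off package (eq. 1.11).

    p_obs is the probability density of (re)emission toward the observer
    (1/(4*pi) for an isotropic event) and tau_obs the optical depth from
    the event to the edge of the system along the line of sight.
    """
    return luminosity * p_obs * math.exp(-tau_obs)
```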
Polychromatic photon packages
The last but most crucial optimization technique for this report is the usage of poly-
chromatic photon packages within the panchromatic simulations. In this case, photon
packages are emitted which contain photons at many different wavelengths simultane-
ously, instead of only one wavelength independently. This technique is still controver-
sial as it is not clear that it actually will reproduce accurate results. The main concern of
this project is to implement the polychromatic photon packages in the SKIRT program
and investigate the pros and cons of the technique. One of the expected benefits is that
the usage of these polychromatic photons will speed up the simulations. The MC run
will be simultaneously solved at all wavelengths, instead of a run for each wavelength.
Another expected benefit is reduced noise in the color images. If monochromatic photon packages are used, it can happen that an arbitrary pixel in the RGB image is blue while its neighboring pixel is red. When polychromatic photon packages are used, however, every photon calculated contributes to the output images at all wavelengths instead of a single one, so every pixel in the image receives a contribution at several colors. A detailed explanation of how the concept of polychromatic photon packages works is given in the next section, after which the implementation and results will be discussed. The main focus will be the accuracy of the simulations using polychromatic photon packages. This accuracy will be investigated by comparing the adapted simulation with the original one and by statistically calculating the noise and dispersion.
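The noise comparison mentioned here could, for instance, be quantified with a root-mean-square relative deviation. The following Python sketch is a hypothetical illustration of the idea, not the statistic actually used later in this thesis:

```python
import math

def relative_noise(values, reference):
    """Root-mean-square relative deviation between an adapted simulation
    and a reference one, as a simple dispersion estimate."""
    terms = [((v - r) / r) ** 2 for v, r in zip(values, reference)]
    return math.sqrt(sum(terms) / len(terms))
```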
1.3 Polychromatism
1.3.1 Polychromatism in SUNRISE
The optimization method that makes use of polychromatic photon packages explained
in subsection 1.2.2 has already been implemented and tested before in a free Monte
Carlo dust radiative transfer code called SUNRISE [Jonsson, 2006]. In the article describing this concept, P. Jonsson asserts that the use of polychromatic photon packages significantly improves the calculations in efficiency and accuracy for spectral features.
One of the results he obtained is shown in figure 1.3, where the difference between the
results using SUNRISE and the RADICAL [Dullemond and Turolla, 2000] code are plotted
as function of wavelength for four different optical depths. It is shown that his results
agree well for small optical depths, which means that he obtained accurate results for
optically thin systems. However, for larger optical depths, and especially for the edge-
on configurations, the relative differences reach ±40%, which is larger than the internal
differences between the codes. Stratifying the calculations into two wavelength ranges
instead of one solves this problem of high discrepancies, making the results agree very
well with the monochromatic results.
To investigate another advantage of polychromatism, P. Jonsson tried to quantify the relative efficiencies of the two methods by defining the efficiency as

\epsilon = \frac{F_\lambda^2}{T \, \sigma_{F_\lambda}^2} ,

where T is the CPU time required to complete the calculations and \sigma_{F_\lambda} the Monte Carlo sampling uncertainty in the SED. Hence, the efficiency quantifies the inverse of the CPU time necessary to produce results of unit relative accuracy, independent of the
number of rays traced. The results he obtained for the efficiency of the monochromatic and polychromatic methods as a function of wavelength are shown in figure 1.4. For low optical depths, i.e. \tau_\nu < 10, the efficiency of the polychromatic algorithm (solid lines) exceeds that of the monochromatic one (dashed lines). For \tau_\nu = 10 the efficiency of the monochromatic method overtakes the polychromatic one at long wavelengths, and for \tau_\nu = 100, i.e. a very high optical depth, this becomes the case at shorter wavelengths as well. When P. Jonsson uses stratified polychromatic calculations, an efficiency greater than that of the monochromatic calculations is obtained.
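Jonsson's efficiency metric is straightforward to compute; here is a minimal Python sketch (the function name is hypothetical):

```python
def efficiency(flux, cpu_time, sigma_flux):
    """Jonsson's efficiency metric: the inverse CPU time needed to reach
    unit relative accuracy, independent of the number of rays traced."""
    return flux ** 2 / (cpu_time * sigma_flux ** 2)
```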
Figure 1.3: Difference in SED between results from SUNRISE and RADICAL. The results of the
polychromatic algorithm are indistinguishable from those obtained with the monochromatic
algorithm. For the optically thick case, stratified polychromatic calculations are used to avoid the problem of diverging results. [Jonsson, 2006]
In this thesis, we will investigate if similar benefits as obtained in SUNRISE can be
achieved when this optimization technique is implemented in SKIRT. Our main goal
is to look at the accuracy of the calculations obtained when the polychromatic algorithm is used. As the code will probably not be implemented in the most efficient way, it will not be possible to compare the speeds of the monochromatic and polychromatic codes. Hence, we will not focus on the efficiency of the code, which makes it impossible to investigate this expected advantage of the optimization technique.
Figure 1.4: Efficiencies of the polychromatic (solid lines) and monochromatic (dashed lines)
method. The stratified polychromatic calculation is shown as a dot-dashed line. For lower
optical depths, the efficiency of the polychromatic algorithm exceeds that of the monochro-
matic one for all wavelengths. For high optical depths, the stratified calculations are needed to
maintain a higher efficiency than in the monochromatic case. [Jonsson, 2006]
1.3.2 Polychromatism in SKIRT
In the monochromatic case, every wavelength is treated independently, which is pos-
sible because scattering by dust grains is an elastic process. Here we introduce and
implement a polychromatic algorithm within the panchromatic simulation of SKIRT.
As in this case every ray samples every wavelength, the Monte Carlo Radiative Trans-
fer calculations will be solved simultaneously at all wavelengths. The photon packages
will contain a list of luminosities, each corresponding to a certain wavelength. To avoid unnecessary calculations, a minimum and a maximum wavelength are stored as properties of the photon package, above and below which the luminosities are zero. A reference wavelength is also stored as a characteristic of the photon package, for use in the simulation of the propagation and scattering processes. A sensible first guess for this reference wavelength lies halfway through the optical range, at about 0.55 micron, as the stellar emission of photon packages mostly covers the optical range. This value at the center of the V band is a good guess for the stellar emission phase, but whenever thermal dust emission is taken into account, the reference wavelength should instead be set halfway through the IR range, as photons emitted by dust grains mostly cover those wavelengths.
The polychromatic photon packages in SKIRT now include 9 characteristics:
• A luminosity vector, containing every wavelength dependent luminosity Lλ, or-
dered from the lowest to the highest wavelength contained in the simulation.
From this vector, the total luminosity Ltot of the package can be estimated.
• A constant Nlambda, giving the number of wavelength bins included in the simulation.
• The minimum wavelength index, for which lower wavelengths contain lumi-
nosities equal to zero.
• The maximum wavelength index, for which higher wavelengths contain lumi-
nosities equal to zero.
• The reference wavelength index, at which the random interaction optical depth and the random scattering direction are drawn. In the stellar emission phase this is set to 0.55 µm, i.e. halfway through the optical range.
• The position x of the photon package, which refers to a location in the system
where the last process took place.
• The propagation direction k of the photon package.
• A flag returning true or false, indicating its origin, i.e. if the package is emitted
by a star or by a dust grain.
• A counter indicating the number of scattering events that the photon package
has already experienced.
Note that the selected range of wavelengths is divided into bins, such that each bin represents a certain wavelength group in the simulation and is treated as one wavelength in the calculations. Increasing the number of bins hence increases the accuracy of the calculations.
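The nine characteristics listed above map naturally onto a small data structure. The following Python sketch (hypothetical names; SKIRT's actual implementation is a C++ class) illustrates the idea:

```python
from dataclasses import dataclass

@dataclass
class PolyPhotonPackage:
    """Sketch of the nine properties of a polychromatic photon package."""
    luminosities: list          # L_lambda per wavelength bin, low to high
    n_lambda: int               # number of wavelength bins (Nlambda)
    min_index: int              # lowest bin with nonzero luminosity
    max_index: int              # highest bin with nonzero luminosity
    ref_index: int              # bin used to sample tau and the phase function
    position: tuple             # x: location of the last emission/interaction
    direction: tuple            # k: propagation direction
    from_dust: bool             # True if emitted by a dust grain
    n_scatterings: int = 0      # scattering events experienced so far

    def total_luminosity(self):
        """Total luminosity L_tot, estimated from the luminosity vector."""
        return sum(self.luminosities)
```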
The polychromatic photon packages are relevant only within the panchromatic mod-
eling. A panchromatic simulation constructs a 3D model for the stars and dust from
which it can reproduce the observed images and SEDs over the entire electromag-
netic spectrum. The simulations usually span over the UV and sub-millimeter wave-
length bands, including absorption, scattering and thermal emission by dust grains
[Camps, 2013]. Hence, the implementation of the polychromatic photon packages
should be within the PanMonteCarloSimulation class. SKIRT also offers another type of
Monte Carlo simulation that operates at only one or more distinct wavelengths rather
than a discretized range. Such an oligochromatic simulation does not include thermal
dust emission. Necessary adjustments in sub- or superclasses commonly used by both
the panchromatic and the oligochromatic simulation will thus not apply to the latter,
i.e. the OligoMonteCarloSimulation class.
1.3.3 Problem and solution
Using a reference wavelength for each photon package is a delicate technique, as it
introduces a significant complication. The probability distributions from which the
interaction optical depths (and thus the path lengths) and the scattering directions of
the photon packages are sampled are wavelength dependent. For example, rays of
shorter wavelength will tend to travel shorter distances before interacting, as the dust
opacity increases towards these wavelengths. Thus, two important astrophysical
quantities used in the propagation and scattering of the photon package, namely the
interaction optical depth τλ and the scattering phase function Φs(θ, λ), depend on
wavelength but will be sampled only at the reference wavelength. These probability
distributions will consequently be exactly correct only at the reference wavelength,
while they will deviate from the exact values at the other wavelengths.
Biased distributions are used to compensate for this problem. The intensity of the ray
at a given wavelength is altered at the point of interaction by a weight factor wλ. This
biasing factor is shown for the forced scattering case in eq. 1.12, where eq. 1.10 is used
to calculate the quotient of the probability distributions:

wλ = q(τλ) / q(τref) = e^(τref − τλ) · (τλ / τref) · (1 − e^(−τref,path)) / (1 − e^(−τλ,path))    (1.12)
This biasing factor is of course equal to unity for the reference wavelength. Note that
τλ,path is the total optical depth for a given wavelength λ from the point of emission
to the edge of the medium in the direction of propagation. As a part of the ray will
interact somewhere along the path, the optical depth of the interaction τλ is randomly
drawn in the range [0, τλ,path].
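The sampling of the forced interaction optical depth and the evaluation of the weight in eq. 1.12 can be sketched as follows. This is a minimal Python illustration, not SKIRT code; the scaling τλ = τref · τλ,path/τref,path used here is worked out in section 2.3:

```python
import math
import random


def sample_forced_tau(tau_path, rng=random.random):
    """Draw an interaction optical depth from q(tau) = e^-tau / (1 - e^-tau_path),
    i.e. an exponential distribution truncated to [0, tau_path] (forced scattering)."""
    xi = rng()
    return -math.log(1.0 - xi * (1.0 - math.exp(-tau_path)))


def bias_factor(tau_ref, tau_ref_path, tau_lam_path):
    """Weight w_lambda of eq. 1.12 for a wavelength whose interaction optical
    depth scales as tau_lam = tau_ref * tau_lam_path / tau_ref_path."""
    ratio = tau_lam_path / tau_ref_path   # equals tau_lam / tau_ref
    tau_lam = tau_ref * ratio
    return (math.exp(tau_ref - tau_lam) * ratio
            * (1.0 - math.exp(-tau_ref_path)) / (1.0 - math.exp(-tau_lam_path)))
```

At the reference wavelength itself, where tau_lam_path equals tau_ref_path, the factor reduces to unity, as required.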
The probability of scattering in a certain direction is given by the scattering phase
function Φs(θ, λ). In the polychromatic simulation, the scattering angle θ is drawn at
the reference wavelength λref. The biasing factor by which the ray intensity must be
multiplied after scattering is then given by eq. 1.13:

wλ = Φs(θ, λ) / Φs(θ, λref)    (1.13)
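As an illustration, with the Henyey-Greenstein phase function (a common analytic choice in dust radiative transfer, used here purely as an example; the wavelength dependence enters through the asymmetry parameter g), the weight of eq. 1.13 can be evaluated as:

```python
def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function; the normalization constant is omitted
    because it cancels in the ratio of eq. 1.13."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5


def scattering_bias(cos_theta, g_lam, g_ref):
    """Weight w_lambda = Phi(theta, lambda) / Phi(theta, lambda_ref), where the
    wavelength dependence is carried by the asymmetry parameter g."""
    return hg_phase(cos_theta, g_lam) / hg_phase(cos_theta, g_ref)
```

For forward scattering (cos θ close to 1) a wavelength with a larger g than the reference receives a weight above unity, since forward scattering is then more probable at that wavelength.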
Note that, for a fixed number of rays, the errors will increase at wavelengths where
the dust opacity differs strongly from the one at the reference wavelength. Wave-
lengths at the extremes of the range will have biasing factors that deviate the most
from the reference weight factor of unity. This complication can make the biasing
factors very large, which can dominate the results at these particular wavelengths
[Jonsson, 2006]. A proper choice for the reference wavelength is hence crucial for the
accuracy of the method. It should be chosen experimentally, such that the range of
weight factors encountered in the problem is minimized.
It is not always easy to select an ideal reference wavelength that avoids the above
mentioned problem. In some models, for example very optically thick ones, the
weight factor in eq. 1.12 will grow without bound. A possible solution is to consider
partly polychromatic photon packages, which contain a smaller range of wavelengths
rather than the whole electromagnetic spectrum. This can be done by calculating the
biasing factors before the propagation and scattering processes are performed. When
these weight factors deviate too much from unity (the value of wλ at the reference
wavelength), the photon package can be split into two or more packages, each con-
taining a part of the luminosity vector of the original one. This method will be re-
ferred to as the photon splitting technique. New reference wavelengths are then calcu-
lated for these split luminosity packages, lying between their minimal and maximal
wavelengths.
The difficulty now lies in deciding when the biasing factor is too high. The higher
these factors become, the less accurate the calculations will be. On the other hand,
setting a strict limit on these factors will result in a large number of photon splits,
which slows down the calculations. A certain allowed deviation from unity should
therefore be chosen that balances the speed of the calculations against the accuracy
they deliver.
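A minimal sketch of such a splitting criterion is shown below. This is illustrative only; the greedy grouping strategy and all names are assumptions, not the SKIRT implementation:

```python
def split_ranges(weights, b_dev):
    """Greedy sketch of the photon splitting technique: walk the wavelength bins
    and start a new sub-package whenever the biasing factor, computed relative to
    the first bin of the current sub-package (its provisional reference), deviates
    from unity by more than b_dev. Returns (start, end) index pairs."""
    ranges, start = [], 0
    for i in range(1, len(weights)):
        # deviation of bin i relative to the reference bin of the current group
        if abs(weights[i] / weights[start] - 1.0) > b_dev:
            ranges.append((start, i - 1))
            start = i
        # a new reference wavelength would later be chosen inside each sub-range
    ranges.append((start, len(weights) - 1))
    return ranges
```

Each returned index pair would become a partly polychromatic package carrying the corresponding slice of the luminosity vector.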
1.4 Overview of the symbols
The symbols that are used in the previous sections for the description of the radiative
transfer algorithm, and that will return in the following sections, are briefly summa-
rized in the table below. Note that, except for the position, the propagation direction
and the biasing limit, all of these quantities clearly depend on wavelength.
Symbol     Description
x          Photon package's position of the last emission or interaction.
k          Propagation direction of the photon package from a certain position x.
κ          Dust opacity or interaction coefficient per unit mass.
a          Dust grain albedo (the fraction of the photon that is scattered during
           an interaction).
τλ,path    Total optical depth of a photon's path at wavelength λ, from the point
           of emission to the edge of the medium.
τλ         Randomly drawn interaction optical depth at wavelength λ.
Lλ         Luminosity of the ray at wavelength λ.
Ltot       The total luminosity (over all wavelengths) of a photon package or system.
wλ         Biasing factor used to alter the luminosity Lλ when the position of
           interaction or the scattering angle is drawn at the reference
           wavelength λref instead of λ.
bdev       Limit set to constrain the magnitudes of the biasing factors.
Ftot,λ     Total flux of the system at wavelength λ.
Ftrans,λ   Total flux of the system at wavelength λ without consideration of dust.
Fdir,λ     Flux of the system at wavelength λ reaching the detectors directly,
           influenced by extinction but not by scattering.
Fsca,λ     Scattered flux of the system at wavelength λ.
Fdust,λ    Flux of the system emitted by the dust at wavelength λ.
2 Implementation of the polychromatic photon packages
2.1 The different stages
In this section, the different stages that follow each other in the panchromatic simu-
lation are briefly itemized, in order to give a clear view of the structure of the SKIRT
code.
SKIRT starts the panchromatic simulation by activating the stellar emission phase.
This stage, run in the code by the runstellaremission function, starts and simulates
the life cycle of every photon package. In particular, the function gives birth to all of
the luminosity packages and, after the radiation emerging towards the detection in-
struments has been estimated, begins their propagation through the dust. The pho-
ton's life cycle is schematically explained in figure 2.1. How the birth and initializa-
tion of the photons is established is the topic of section 2.2, where a simplified model
without dust will be put together.
As soon as the panchromatic simulation includes a dust system, which is usually the
case in all types of stellar objects, dust absorption and scattering are automatically
enabled. The stellar emission phase simulates the propagation through this dust by
entering a cycle consisting of five different steps, which are explained in more detail
in section 2.3.
When all photons have terminated their life cycles and a dusty medium is present in
the system, the next stage of the simulation is activated: the so-called dust self-
absorption phase. It simulates the continuous emission and re-absorption of photon
packages by the dust grains. Because this particular phase of the simulation, de-
scribed in the rundustselfabsorption function, is only possible when dust emission
is taken into account, it is treated in one of the last parts of this thesis (section 2.5).
The final phase of a panchromatic simulation is the dust emission phase. This phase,
explained in section 2.4, is driven by the rundustemission function. In this stage, the
absorbed luminosity of the entire simulation, with or without dust self-absorption, is
estimated in such a way that the dust can begin its thermal emission. The photons
which underwent interstellar reddening then leave the system and peel-off packages
are created. Note that if the simulation covers the UV and optical bands only, dust
emission will most likely be irrelevant.

[Figure 2.1 flowchart: launch → peeloffemission → fillDustSystemPath →
simulateescapeandabsorption → is Ltot > Lmin? → if yes: simulatepropagation →
peeloffscattering → simulatescattering → next cycle; if no: terminate life cycle]

Figure 2.1: The life cycle of an individual photon emitted by a stellar object. This figure
schematically shows the successive functions called in the stellar emission phase. The same
order of functions (except for the call to the launch function) and the same terminating
condition are used in the dust self-absorption and dust emission phases. Note that in the dust
self-absorption phase no peel-off photons are created.
Parallelization
The monochromatic radiative transfer code in SKIRT, i.e. every stage explained
above, is currently parallelized using multithreading. This type of programming al-
lows multiple threads to exist within a single process. The threads share the same
process memory, but are able to perform calculations independently of each other
[Quinn, 2004]. This type of shared memory parallelization is called task or control
parallelism, and it distributes the different levels of iterations in the MC run over the
available threads. In this way, the SKIRT program can simulate the life cycles of dif-
ferent photon packages simultaneously. This technique significantly improves the
speed of the calculations and is therefore extremely useful for CPU-intensive simula-
tions. Note that in the near future, one of the planned developments of SKIRT is to
use MPI (Message Passing Interface) for data parallelism, in order to allow Monte
Carlo simulations to be run on distributed memory systems [Baes et al., 2011]. This
should speed up the calculations even more when simulating on supercomputers.
The main task parallelism of the original SKIRT version is situated in the loop over
wavelengths. This implies that SKIRT runs both the stellar and dust emission phases
at different wavelengths simultaneously. When polychromatic photon packages are
implemented, the iterations in the stellar and dust emission phases run over different
photon packages instead of different wavelengths. An adaptation must thus be made
to construct a loop over packages, each containing all wavelengths.
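The resulting loop over photon packages can be sketched with a thread pool. This is a minimal Python illustration of the shared-memory task parallelism described above; SKIRT itself uses C++ threads, and simulate_life_cycle is a hypothetical placeholder:

```python
from concurrent.futures import ThreadPoolExecutor


def simulate_life_cycle(package_index):
    """Placeholder for one photon package's full life cycle (hypothetical)."""
    return package_index  # the real code would trace the package through the dust


def run_stellar_emission(n_packages, n_threads=4):
    """Task-parallel sketch: the loop over photon packages is distributed over a
    pool of threads that share the same process memory."""
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(simulate_life_cycle, range(n_packages)))
```

Each thread processes its packages independently; only the accumulation of results (e.g. absorbed luminosities per dust cell) needs synchronization in a real implementation.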
2.2 Stellar emission of the photon packages
As the photon packages begin their life cycles, they are born, or launched, from the
stellar system specified by the user. After being created, an optimization technique is
called for the first time, in which another, peeled-off photon is stripped from the orig-
inal photon package. These two procedures are called only once for every photon
package.
Launching of the photon packages
This feature is handled by the launch function, which initializes all the properties of
a newborn photon package. There are essentially four stellar systems in which the
launching function is implemented, itemized here by their class names in the code:
• AdaptiveMeshStellarSystem: this class represents stellar systems defined by the
stellar density and properties, such as the metallicity Z or the age of the stellar
population, imported from an adaptive mesh data file.
• CompStellarSystem: this class represents stellar systems that are the superposi-
tion of a number of components. The individual components are defined inter-
nally as objects of the StellarComp class.
• SPHStellarSystem: this class represents stellar systems defined from a set of SPH
(Smoothed Particle Hydrodynamics) star particles, such as for example resulting
from a cosmological simulation. The information on the SPH star particles is read
from a file.
• VoronoiStellarSystem: this class represents stellar systems defined by the stellar
density and properties, such as the metallicity Z or the age of the stellar popula-
tion, imported from a Voronoi mesh data file.
In general, all these stellar systems feature approximately the same launching func-
tion to set up a photon package with the 9 initial characteristics described in section
1.3.2. The flag indicating the origin of the photon is set to true, as it is born and emit-
ted from the stellar object itself. As explained in section 1.2.1, the initial position and
direction of the photon are generated randomly. An ordered table of floating point
values, called the normalized cumulative total luminosity vector, must be created to
initialize the photon's position, as this vector is used to identify in which cell or in
which stellar object the emission takes place. Calculating the total luminosity of each
cell or object, the normalized cumulative total luminosity vector will have the
following form:

Xtot = { 0, Ltot,1/Ltot, (Ltot,1 + Ltot,2)/Ltot, ..., 1 }
Here, the total luminosities Ltot,i, integrated over the whole wavelength range, be-
long to each cell or object independently, while Ltot is the total luminosity of the en-
tire system. A particular cell or object can refer to different things, depending on the
stellar system used in the simulation. In the case of an adaptive mesh or Voronoi stel-
lar system, these cells are mesh cells. A random mesh cell is determined from the
above normalized cumulative total luminosity vector, after which a random position
within this cell's boundaries is determined. When using an SPH stellar system, the
i'th object represents the i'th SPH particle. Once a random SPH particle has been cho-
sen, a position is determined randomly from the smoothed distribution around the
particle's center. Finally, the objects can also refer to different stellar components. In
this case, the position of emission is determined randomly from the geometry of the
i'th stellar component.
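The selection of a random cell or object from the normalized cumulative total luminosity vector can be sketched as follows (a minimal Python illustration with hypothetical names):

```python
import bisect


def build_cumulative(cell_luminosities):
    """Normalized cumulative total luminosity vector:
    X_tot = 0, L1/L, (L1+L2)/L, ..., 1."""
    total, x, running = sum(cell_luminosities), [0.0], 0.0
    for lum in cell_luminosities:
        running += lum
        x.append(running / total)
    return x


def pick_cell(x_tot, xi):
    """Map a uniform deviate xi in [0, 1) to the index of the cell or object
    whose cumulative luminosity interval contains it."""
    return bisect.bisect_right(x_tot, xi) - 1
```

Cells with a larger share of the total luminosity occupy a wider interval of [0, 1] and are therefore selected proportionally more often.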
A luminosity vector for the photon package is created within this launch function,
determined by:

Lλ = Lλ,i · (Ltot / Ltot,i) · (1 / Npp)
Hence, when a photon package is located at a position x within the i'th cell/object, its
luminosity at wavelength λ is equal to the luminosity of the cell/object at that partic-
ular wavelength, normalized with respect to the total luminosity of the system (the
total luminosity of all cells or stellar objects) by the factor Ltot/Ltot,i, and divided by
the total number of photon packages. Because of this, the minimum and maximum
wavelengths, below and above which no luminosities are found, are equal to those
determined by the i'th cell/object. The value of the reference wavelength is not yet of
importance, but will be at later stages of the simulation. For simplicity, this property
is set to the index of the wavelength bin closest to 0.55 µm.
Creating peel-off photon packages
After the stellar photon packages are emitted by the adapted launch function, the
peeloffemission function is called to estimate the radiation emerging in the direc-
tion of the detection instruments. One peel-off photon package is created for each in-
strument, with the same 9 characteristics as the original polychromatic photon pack-
age. Only one property, the propagation direction, is altered into the direction of the
observer. As the stellar emission is considered to be isotropic, no extra weight factor
is needed to compensate for the luminosities emerging in that particular direction. In
other words, the probability that the photon would have been emitted towards the detection
instruments, eq. 1.11, is equal to the probability that it is emitted in any other direc-
tion. The instruments detect one wavelength at a time (just as in the monochromatic
case), such that a loop over the wavelength indexes has to be added in the code of the
peel-off technique. Note that this loop only covers the range of wavelengths contain-
ing nonzero luminosities, indicated by the peel-off photon's minimum and maximum
wavelength indexes. This is useful as it excludes unnecessary calculations.
2.3 Propagation through the dust
The second step in the implementation is to add a dusty medium to the simulation of
a stellar object, which causes scattering and extinction of the emitted photon pack-
ages as they propagate through this medium. The inclusion of dust can be enabled
by the user when setting up the ski file. The life cycle of a photon package in a dust
system, after it has been launched by stellar emission, is summarized in the box be-
low. These five procedures are iterated until the polychromatic photon package van-
ishes, i.e. until its total luminosity has decreased below a critical value.
fillDustSystemPath
simulateescapeandabsorption
simulatepropagation
peeloffscattering
simulatescattering
Filling the system with dust
The first step after the emission of the photon package is to determine the dust along
its path. This is done in the fillDustSystemPath function, by calculating the relevant
information, in particular the optical depth, along the path of the photon package
through the dust system and storing it in a so-called DustSystemPath object. This cal-
culation is performed for every photon package and still for every wavelength inde-
pendently, within the range set by the minimal and maximal wavelength indexes of
the package. This means that only a small loop running over the wavelengths of the
photon must be added in the fillDustSystemPath function.
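The bookkeeping performed by fillDustSystemPath can be sketched, for one wavelength and a single dust component, as accumulating the optical depth over the path segments crossing each dust cell. This is a simplified Python illustration; the real DustSystemPath object stores additional geometric information:

```python
def fill_dust_system_path(segment_lengths, densities, kappa):
    """Sketch of the optical depth bookkeeping along a photon's path: for each
    dust cell crossed, accumulate tau_i = sum_j kappa * rho_j * ds_j over the
    preceding segments. The last entry is the total path optical depth tau_path.
    All names are illustrative."""
    tau, cumulative = 0.0, []
    for ds, rho in zip(segment_lengths, densities):
        tau += kappa * rho * ds
        cumulative.append(tau)
    return cumulative  # cumulative[-1] == tau_path
```

The cumulative values are later reused both to locate the cell in which a randomly drawn interaction optical depth falls and to split the luminosity over the cells during continuous absorption.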
Simulating the escape and absorption of the package
Next, the optimization technique called continuous absorption, demonstrated in figure
1.2, is performed by the function simulateescapeandabsorption. This feature simu-
lates the escape from the system and the absorption by the dust of a fraction of the
wavelength dependent luminosity of a photon package. It actually splits this lumi-
nosity Lλ into N+2 different parts, as explained in section 1.2.2, with N the number of
dust cells along the photon's path. The part that scatters is the part of the photon
package that survives and continues in the photon package's life cycle. As before,
this calculation is still performed for every wavelength, so a small loop ranging over
the minimal and maximal wavelength of every photon package is inserted in the
function. At this stage of the simulation only single dust components are treated, as
multiple dust components would overly complicate the calculations.
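The N+2 way split performed by the continuous absorption technique can be sketched as follows. This is a minimal Python illustration for a single wavelength; tau_bounds and the function name are illustrative:

```python
import math


def split_luminosity(L, tau_bounds, albedo):
    """Continuous-absorption sketch: split the luminosity L at one wavelength
    into N+2 parts, namely the escaping part, one absorbed part per dust cell,
    and the part that scatters and survives. tau_bounds holds the cumulative
    optical depths at the far edge of each of the N cells along the path, so
    tau_bounds[-1] is the total path optical depth."""
    tau_path = tau_bounds[-1]
    escaped = L * math.exp(-tau_path)
    absorbed, prev = [], 0.0
    for tau in tau_bounds:
        # fraction interacting inside this cell, times (1 - albedo) for absorption
        absorbed.append(L * (1.0 - albedo) * (math.exp(-prev) - math.exp(-tau)))
        prev = tau
    scattered = L * albedo * (1.0 - math.exp(-tau_path))
    return escaped, absorbed, scattered
```

By construction the escaping, absorbed and scattered parts add up to the original luminosity L.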
Simulating the propagation through the dust
If the total luminosity Ltot of the photon package after the continuous absorption
drops below a certain minimum value, its life cycle is ended and this luminosity
package is no longer taken into account. If, on the contrary, it stays above the critical
luminosity value, the scattered part of the photon package subsequently propagates
through the dusty medium via the simulatepropagation function. This function de-
termines the next scattering location of a photon package, as explained in section
1.2.1, and then simulates the propagation to this position. Here, the problem men-
tioned in section 1.3.3 arises, as a random interaction optical depth must be sampled
from the PDF given in eq. 1.10, which is only done at the reference wavelength.
All luminosities Lλ of the photon package therefore have to propagate to a random
interaction point, determined by the value of the optical depth of the photon's path
τref,path at the reference wavelength. This is only correct for the luminosity at the ref-
erence wavelength Lλref. To compensate for this discrepancy, the intensity of the ray
at the other wavelengths at the point of interaction should be multiplied by the pre-
viously explained biasing factors wλ. Recall eq. 1.12 for these factors:

wλ = e^(τref − τλ) · (τλ / τref) · (1 − e^(−τref,path)) / (1 − e^(−τλ,path))
This method is schematically displayed with a simplistic model in figure 2.2.
The biasing factors in eq. 1.12 can be calculated in three steps. First, the optical depth
of the photon's path τref,path is retrieved from the DustSystemPath object. This is only
done at the reference wavelength λref, such that the random optical depth τref can be
drawn in the range [0, τref,path].
The next step is to create a loop over all wavelengths of interest of the package, rang-
ing from the minimum wavelength index to the maximum index. In this loop, τλ,path
is extracted as before. The only difference for the wavelengths other than the refer-
ence wavelength is that τλ is not drawn randomly. When looking at eq. 1.4 for an ar-
bitrary wavelength, it is possible to divide this by the same equation for the
[Figure 2.2 diagram: the luminosities Lλ1, ..., Lλref, ..., LλN at wavelengths
λ1, ..., λref, ..., λN are each multiplied by their biasing factor wλ, with wλref = 1.]

Figure 2.2: Demonstration of the method of compensation by use of biasing factors. These
factors are wavelength dependent and must be calculated using eq. 1.12. Note that at the
reference wavelength the weight factor equals unity.
reference wavelength to get:

τλ / τref = (∫₀ˢ κλ ρ(s′) ds′) / (∫₀ˢ κref ρ(s′) ds′) = κλ / κref.

This can also be done for τλ,path and τref,path, which contain the same integral but
with the path length s extending to the edge of the medium; this yields the same
right hand side as in the above equation. The optical depth τλ can thus be estimated
using the fact that

τλ / τref = τλ,path / τref,path.
Here, it is clear that multiple dust components would make the calculations much
more complicated: the opacities and mass densities of the different components
would have to be summed, and the ratio would no longer reduce to κλ/κref. Eventu-
ally, we can calculate the biasing factors wλ for every wavelength, using the optical
depths calculated before and eq. 1.12.
Simulating peel-off photon packages
After the propagation of the photon package to a certain point in the system, peel-off
photon packages are created by the peeloffscattering function. This is also one of
the optimization techniques discussed in section 1.2.2, where figure 1.2 shows the
working of this method. Note that this function is called just before a scattering
event is simulated.
In this calculation, a peel-off photon package will be created for every instrument in
the instrument system and will be forced to propagate in the direction of the observers.
This peel-off package will also contain a list of luminosities, from which every wave-
length dependent luminosity Lλ will be detected independently. To compensate for
the change in propagation direction, Lλ must be altered by multiplying it with an ad-
ditional weight factor given by eq. 1.11. This factor, which represents the probability
that a photon package would be scattered in the direction kobs if its original propaga-
tion direction was k, is wavelength dependent. Because of this, the detection of the
polychromatic peel-off photon packages will still be performed for every wavelength
independently, such as in the monochromatic case.
Simulating the scattering event
As the final aspect of the photon package's life cycle, it undergoes scattering by the
dust grains. This procedure is accomplished by the simulatescattering function,
which first increases the number of scattering events experienced by the photon
package by one. Secondly, the function generates a new random propagation direc-
tion, sampled from the scattering phase function Φs(θ, λref), remembering that only
one dust component is assumed. As the new propagation direction is drawn at the
reference wavelength, biasing factors must be calculated for the other wavelengths.
Recall the formula for these factors, given in eq. 1.13:

wλ = Φs(θ, λ) / Φs(θ, λref)

The concept of these weight factors is the same as before, i.e. the intensity of the ray
at the different wavelengths must be multiplied by the computed values of the bias-
ing factors in the way demonstrated in figure 2.2.
When the photon finishes this last process, it has completed one cycle of its life. Sub-
sequently, it starts the following cycle by filling the newly generated path of the pho-
ton package with dust. All the previous processes are run again, until the package
reaches a total luminosity Ltot lower than Lmin. When this is the case, the photon
package dies and its life cycle is terminated.
2.4 Thermal dust emission
In the preceding sections the thermal emission by the dust grains has not been con-
sidered. Only a simple adaptation in the ski file must be made to take this feature
into account. As explained in section 2.1, the rundustemission function is responsi-
ble for this feature.
Before the life cycle of the photon packages can be started, the total luminosity that is
emitted from every dust cell must be determined. Originally, for a monochromatic
photon package at wavelength λ and a dust cell m, this is just the product of the total
luminosity absorbed in that cell, L^abs_m, and the normalized SED at the particular
wavelength λ corresponding to that cell, as obtained from the dust emission library.
The same is done in the adjusted program, such that only a loop over the wave-
lengths should be implemented to construct the luminosity vector of the polychro-
matic photons, determined by L^abs_m and the normalized dust SED at every wave-
length corresponding to cell m. The reference wavelength is this time set somewhere
between the IR and submm range, as thermal dust emission predominantly covers
that wavelength range. It is determined by the center of the wavelength range that
the emitted photons cover.
A vector Xtot is created that describes the normalized cumulative total luminosity
distribution, containing the following components:

Xtot,m = Σ_{m′=0}^{m} Ltot,m′ / Ltot    (2.1)
This vector is used to generate random dust cells from which photon packages can
be launched. The dust emission can then start by launching Npp polychromatic pho-
ton packages. The original position of each photon is chosen as a random position in
the dust cell m, which in its turn is chosen randomly from the cumulative luminosity
distribution Xtot.
As the reddened photon packages are emitted, their life cycle through the dust can
begin, just as explained in section 2.3, by making use of the five specific functions:
fillDustSystemPath, simulateescapeandabsorption, simulatepropagation, peel-
offscattering and simulatescattering. Note that no further adaptations must be
made in these functions.
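The construction of the dust-emission luminosity vector for a package launched from cell m can be sketched as below. This is a Python illustration; the normalization by Ltot and Npp is assumed to be analogous to the stellar launch formula of section 2.2, and all names are hypothetical:

```python
def dust_emission_luminosities(L_abs_m, sed_m, L_tot, n_pp):
    """Sketch of the luminosity vector of a polychromatic dust-emission package
    launched from cell m: the absorbed luminosity L_abs_m times the normalized
    dust SED of that cell, rescaled by L_tot / L_abs_m and divided by the number
    of photon packages (assumed analogous to the stellar launch)."""
    norm = sum(sed_m)
    return [L_abs_m * (s / norm) * (L_tot / L_abs_m) / n_pp for s in sed_m]
```

Summed over all packages and cells, the emitted luminosity then reproduces the total absorbed luminosity, as in the stellar case.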
2.5 The dust self-absorption
The reddened photon packages, which are created by thermal dust emission after
stellar radiation was absorbed, can undergo a similar process in the rest of the dusty
medium. This process is called dust self-absorption and is a method to calculate the
internal equilibrium temperature of the dust. As the thermal emission of dust ranges
over the infrared and sub-millimeter spectrum, it is these wavelengths that are re-
absorbed, although interstellar dust is somewhat less efficient at absorbing in this
range.
The relevance of the dust self-absorption process can depend on two aspects: the op-
tical depth of the system and the temperature of the dust. If the system contains a
modest amount of dust, having a low optical depth throughout the medium, it will be
transparent to long-wavelength radiation. When the system contains a considerable
amount of dust, making it opaque, this process will become important as mid- and
far-infrared radiation will be more likely to be absorbed. Secondly, the dust tempera-
ture affects the likelihood of absorption as well. An increasing temperature of the dust
can shift the dust SED peak to shorter wavelengths, causing it to enter the NIR and
optical ranges. Dust is more efficient in absorbing these wavelengths, making the dust
self-absorption process more important in these cases.
The dust self-absorption function consists of an outer loop which absorbs and re-
emits the radiation several times. This outer loop, and hence also the function itself,
terminates when either the maximum number of dust self-absorption iterations has
been reached, which is set to a value of 100, or when the total luminosity absorbed
by the dust is stable, i.e. when it does not change by more than 1% compared to the
previous iteration.
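The outer convergence loop can be sketched as follows. This is a minimal Python illustration; emit_and_absorb stands for one full emission and re-absorption cycle and is a hypothetical placeholder:

```python
def run_dust_self_absorption(emit_and_absorb, max_iterations=100, tolerance=0.01):
    """Sketch of the outer self-absorption loop: re-emit and re-absorb until the
    total absorbed luminosity changes by no more than 1% between iterations, or
    until 100 iterations have been performed. emit_and_absorb(previous) returns
    the newly absorbed total luminosity."""
    previous = 0.0
    for iteration in range(1, max_iterations + 1):
        absorbed = emit_and_absorb(previous)
        if previous > 0.0 and abs(absorbed - previous) <= tolerance * previous:
            return absorbed, iteration  # converged: absorbed luminosity is stable
        previous = absorbed
    return previous, max_iterations
```

For an optically thin system the absorbed luminosity barely changes between iterations and the loop terminates almost immediately; for opaque systems more cycles are needed.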
Before the life cycle of the emitted polychromatic photon packages is started, a simi-
lar procedure as in section 2.4 must be carried out: determining the total luminosity
Ltot,m absorbed in, and hence emitted by, every dust cell. Again a normalized cumu-
lative total luminosity distribution as in eq. 2.1 is created and used to determine the
randomly generated dust cells from which the photons will be emitted.
When the life cycle within this process starts, the peel-off technique (i.e. the function
peeloffscattering) should not be applied, as the dust self-absorption phase is a
technique for computing the internal equilibrium of the dust grains. As the emergent
radiation field does not have to be estimated, the life cycle of a photon package
within this phase is somewhat simpler than before.
When this function has terminated, the function explained in section 2.4, which cal-
culates the final emission of the dust and afterwards estimates the radiation emerg-
ing in the direction of the instruments, is called.
2.6 Overview
In the preceding sections the different stages within the panchromatic simulation in
SKIRT were explained in detail. We described all the adaptations in the code that
were necessary to implement the polychromatic algorithm. These implementations
were made in different classes, of which the most important ones are the PhotonPackage
class, the MonteCarloSimulation class and the PanMonteCarloSimulation class. Note
that the necessary adjustments in sub- or superclasses commonly used by both the
panchromatic and oligochromatic simulations of SKIRT do not apply to the latter.
The most difficult part was the implementation of the biasing factors (eq. 1.12 and
1.13) that were needed to compensate for the incorrect sampling of the interaction op-
tical depth and scattering angle at wavelengths other than the reference wavelength.
This was explained in section 2.3.
It is now possible to investigate the results obtained by the polychromatic algorithm.
This is done in the next chapter by comparing the original SKIRT code with the
adapted one and verifying their similarities and differences. The resulting output
will mostly be tested for accuracy.
3 Findings
3.1 Stellar model without dust: bi-Plummer model
For investigating the first step of the implementation, it is recommended to choose a very simplistic galaxy model. Here, a spherical stellar model is used, called a Plummer model, which represents elliptical galaxies relatively well. We create a simple ski file consisting of two unequal Plummer model stellar systems, each with a black body spectral energy distribution (SED). The first Plummer model has a black body temperature of 10^4 K, while the other is one order of magnitude colder, with a temperature of 10^3 K. The peak of the spectrum of the latter will hence be visible at longer wavelengths than that of the first. For running the simulation, 10^7 photon packages are launched within the wavelength range of 0.1 − 10 µm, divided into 21 wavelength grid points. The more wavelength bins are used to calculate the stellar emission, the more accurate the results will be. Only wavelengths within the optical range are taken into account, since there is no emission by dust grains, nor is a dust system present in this model.
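As a quick sanity check on these wavelength ranges, Wien's displacement law (λmax ≈ b/T, with b ≈ 2898 µm K) places the two black body peaks inside the quoted intervals. A minimal sketch (illustrative only, not SKIRT code):

```python
# Wien's displacement law: lambda_max = b / T, with b ~ 2898 micron*K.
# Illustrative check (not SKIRT code) of where the two Plummer SEDs peak.
WIEN_B = 2898.0  # micron * K

def peak_wavelength(temperature):
    """Peak wavelength (in micron) of a blackbody at the given temperature (K)."""
    return WIEN_B / temperature

print(peak_wavelength(1e4))  # ~0.29 micron: inside the 0.1-1 micron interval
print(peak_wavelength(1e3))  # ~2.9 micron: inside the 1-10 micron interval
```

The colder model indeed peaks an order of magnitude redder than the hotter one, consistent with the two separated black body curves described below.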
This simple ski model, without any dust components, is an ideal way to check the validity of the implementation of the launch function, i.e. the function that creates the photon packages. As no dusty medium is present, the photon packages do not undergo any interaction while propagating towards the observers. They are only emitted by the stellar systems, after which the peel-off technique is used to estimate the radiation that emerges in the directions of the detectors. For now, the setting of the reference wavelength is not important, as this value is not used in the launch function or the peeloffemission function. It will, however, become significant whenever a dust system is present in the model.
When looking at the SED files, we can distinguish five different fluxes, each spanning multiple wavelength bands.
• Total flux: the total flux detected by the instruments. It contains the direct, scat-
tered and dust flux.
• Direct flux: the flux resulting from the stellar photon packages which directly
reach the instruments.
• Scattered flux: the flux resulting from stellar photon packages that were scattered by the dust before they reached the instruments. As no dust is included in
this model, this flux component will be zero.
• Dust flux: the flux resulting from photon packages that were emitted by the dust.
As no dust is included in this model, this flux component will be zero as well.
• Transparent flux: flux that would be detected by the instruments if there were no
dust in the system. In this model, the transparent flux will be equal to the direct
flux.
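The bookkeeping behind these components can be sketched as follows (illustrative Python with made-up flux values, not SKIRT code): the total flux is the sum of the direct, scattered and dust components, and in a dust-free model the transparent flux equals the direct flux.

```python
# Illustrative flux bookkeeping for a dust-free model (not actual SKIRT code;
# the flux values below are made up).
def total_flux(direct, scattered, dust):
    """Total flux as the sum of its three components."""
    return direct + scattered + dust

direct = 5.2e-12      # hypothetical direct stellar flux in some band
scattered = 0.0       # no dust -> no scattered flux
dust = 0.0            # no dust -> no dust emission
transparent = direct  # no dust -> transparent flux equals direct flux

assert total_flux(direct, scattered, dust) == transparent
```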
In figure 3.1 the total fluxes of the original and adjusted SKIRT codes are compared by plotting their SEDs on one graph. One can clearly distinguish the stellar emission of both Plummer models, as they result in two separated black body curves. The hottest stellar object is visible at shorter wavelengths, between 0.1 µm and 1 µm, while the coldest is visible at longer wavelengths, between 1 µm and 10 µm. Both curves, showing the fluxes of the monochromatically and polychromatically emitted photons, coincide very well, with an almost negligible relative difference spanning only about 0.016%. As a reference, a dashed line indicating zero percent is drawn on the graph. Both simulations are thus in good agreement, hence the emission of the polychromatic photon packages from the stellar objects is performed correctly within the adapted panchromatic simulation.
Figure 3.1: Top: Comparison of the SED plots for the total flux of the monochromatic (red line)
and polychromatic photon packages (green line). Note that the green curve is covered by the
red curve because they coincide almost exactly. Bottom: The difference plot of the total fluxes
for the monochromatic and polychromatic photon packages. The differences are expressed in
percent.
3.2 Stellar model with dust component
The main purpose of the SKIRT program is of course to study how dust affects the simulated galaxies, which consist of emission sources and a dusty medium. Two important processes of the simulation, absorption and scattering, can be tested to investigate these dust effects. To test them, a ski file is created consisting of a stellar object with an exponential disk geometry, combined with a dust disk that also has an exponential geometry. Note that only one dust mixture should be inserted in the model, as the panchromatic calculations for the polychromatic photons would otherwise become too complicated. The default value is used for the dust type; this is an average dust mixture appropriate for the typical interstellar dust medium. The stellar SED type used for this system is extracted from the Pegase library. A total of 10^6 photon packages is launched, covering the spectrum over a wavelength range of 0.1 − 10 µm. This range is divided into 61 wavelength grid points.
Two detection instruments are set in this model to estimate the radiation it transmits: one instrument looks at the system edge-on, at an inclination of 90 degrees, while the other looks at the spiral galaxy face-on, at zero inclination. As discussed in section 1.1, the edge-on view of the spiral model is affected the most by the dust, as the dust is located predominantly in the disk of the galaxy. This is also illustrated by the results in the following sections.
While testing the adaptations discussed in section 2.3, we can again check the correct launching of the photon packages within this somewhat more complex model. This is done in section 3.2.1 by looking at the transparent fluxes in the SEDs of the simulations, i.e. the fluxes that result from the stellar photon packages without any dust interaction. Afterwards, once the results of the polychromatic launching function are in good agreement with the expectations, the absorption by the dust can be checked. This is done in section 3.2.2 by examining the direct fluxes in the SEDs of both simulations. These direct fluxes represent the emergent stellar radiation of the object, influenced by the absorption by the dust grains. Scattering of the photon packages is not considered in this part, as it forms a third and final step for testing the implementation. The dust scattering is inspected by looking at the scattered fluxes in the SEDs. This is the trickiest part, as biasing is an important factor in this stage of the simulation. Section 3.2.3 shows the testing results for this part.
3.2.1 Launching of the photon packages
The transparent fluxes of both simulations are given in the SED in figure 3.2. This
SED of the spiral model is calculated for a detection instrument looking at the system
at an inclination of 90 degrees. The investigated transparent fluxes correspond to the
fluxes of the simulation without considering any dust interaction. They are hence perfect for assessing the correct performance of the stellar emission in the simulation of this model. As demonstrated in section 3.1 for the bi-Plummer model, we again expect the monochromatic and polychromatic photon simulations to show approximately the same results, as is confirmed by figure 3.2. The relative difference, displayed in the graph below the SED, even returns values equal to zero percent. This seems somewhat strange at first, but note that the fluxes are stored with a maximum of 8 digits after the decimal point. The errors in the transparent fluxes of the simulations are hence too small to be visible in the data. We can conclude that the panchromatic simulation of this edge-on spiral model using polychromatic photon packages does indeed reproduce the same results as when monochromatic photon packages are used.
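Why an 8-decimal storage resolution yields exactly 0% can be illustrated with a toy example (made-up numbers; not SKIRT code): two fluxes differing far below that resolution become identical once written out.

```python
# Illustrative only: two fluxes differing far below the 8-decimal storage
# resolution become identical once written out, so the difference reads 0%.
f_mono = 0.123456789012   # hypothetical monochromatic flux value
f_poly = 0.123456789987   # hypothetical polychromatic flux value

stored_mono = round(f_mono, 8)
stored_poly = round(f_poly, 8)

difference_percent = 100.0 * (stored_poly - stored_mono) / stored_mono
print(difference_percent)  # 0.0
```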
Figure 3.2: Top: Comparison of the SED plots for the transparent flux of the monochromatic
(red line) and polychromatic photons (green line). The green curve is covered by the red curve
because they coincide almost exactly. Bottom: The difference plot of the transparent fluxes of
both simulations. The differences are expressed in percent.
3.2.2 Absorption of the photon packages
By examining the direct fluxes of the simulations, we can investigate whether the dust absorbs the stellar photon packages in the correct manner. The direct flux Fdir represents the radiation received by the detectors that propagated through the dust without being scattered. Figure 3.3 shows the SED of the edge-on spiral galaxy for both simulations. Again, the red curve corresponds to the simulation using monochromatic photon packages and the green curve to the simulation using polychromatic photon packages. Both coincide very well, with a relative difference between −0.3% and +0.1%. This is still a more than adequate result, as the original simulation itself contains errors arising from the random seeds generated for every processor. These internal differences of the code are referred to as noise.
Figure 3.3: Top: Comparison of the SED plots for the direct flux of the monochromatic (red line)
and polychromatic photons (green line). The green curve is covered by the red curve because
they coincide almost exactly. Bottom: The difference plot of the direct fluxes of both simulations.
The differences are expressed in percent.
This so-called noise of the Fdir resulting from the monochromatic photon packages, detected by the instrument at an inclination of 90°, is shown in figure 3.4 for a wavelength range covering 0.4 − 3 µm, corresponding to the peak of the SED. To obtain this figure, the simulation is run 10 times, each run using 10^5 monochromatic photon packages. The noise covers a range of about 2% (±1%). The black line in the figure represents the mean noise x̄ of the simulations, calculated for every wavelength, with standard deviations 1σλ indicated by the error bars. The standard deviation at every wavelength is calculated as follows:
$$\sigma_\lambda = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i,\lambda}-\bar{x}_\lambda\right)^2} \qquad (3.1)$$
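Eq. (3.1) is the sample standard deviation, with the N − 1 denominator. As a sketch with made-up flux values (not thesis data), Python's `statistics.stdev` implements exactly this definition:

```python
import statistics

# Hypothetical direct fluxes from N = 10 runs at one wavelength bin
# (made-up numbers; the thesis runs the simulation 10 times with
# different random seeds).
runs = [1.002, 0.997, 1.005, 0.999, 1.001, 0.996, 1.003, 1.000, 0.998, 1.004]

mean = statistics.mean(runs)    # the mean noise level x-bar at this wavelength
sigma = statistics.stdev(runs)  # sample standard deviation: N-1 denominator,
                                # matching eq. (3.1) exactly
print(mean, sigma)
```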
Reducing the noise in these simulations can be done by increasing the number of monochromatic photon packages. When using for example 10^8 monochromatic photon packages, the noise can be reduced to almost zero.
The same can be done to estimate the noise produced by the polychromatic photon packages in the simulation, displayed in figure 3.5. The relative difference for every simulation, each using 10^5 polychromatic photon packages, is calculated with respect to a reference simulation using 10^8 monochromatic photon packages. As before, the black curve represents the mean noise and the error bars indicate the standard deviation calculated with eq. 3.1. As can be seen in figure 3.5, the simulations produce an error spanning about 1.5%, approximately equivalent to the noise produced by the original simulation in figure 3.4. It is of great importance that the relative differences between the original and adapted simulations are always compared to this noise, to obtain an estimate of how accurate both results are. Note that the noise produced by the adapted SKIRT code in figure 3.5 is not as random as one would expect. Instead, continuous curves corresponding to each simulation with a different random seed are produced. This is not so peculiar, however, as every wavelength dependent flux Fλ is influenced by the biasing factors used in the SKIRT code. The magnitudes of these weight factors form a continuous curve as a function of wavelength, crossing unity at the reference wavelength.
Using the ds9 software, it is possible to visualize this spiral galaxy in a two-dimensional image, as it would appear in the plane of the sky. To obtain a nicer and more realistic image, a bulge is added to the model. The spiral galaxy is now composed of a disk, having an exponential disk geometry, and a bulge, which has a Sersic geometry. Calculating the image composed of the direct flux Fdir only yields the result given in the bottom of figure 3.6 for the simulation using polychromatic photon packages. The spiral galaxy is viewed from an inclination of 88°. The different colors indicate the magnitude of Fdir, as shown by the color bar below. We can clearly observe the structure of the edge-on spiral system, due to the emission by the bulge and the disk of the stellar object. A clear dust lane is visible in the image, caused by the dust extinction in the disk of the galaxy. As the differences are too low to visualize, it is not possible to distinguish the obtained result from the image of the original simulation using monochromatic photon packages, displayed in the top of figure 3.6. To make the percentage difference clear, a residual frame of the two simulations has to be made, shown in figure 3.7.
Figure 3.4: The noise resulting from the direct fluxes Fdir of the panchromatic simulation using
monochromatic photon packages. The error bars indicate 1σ deviations. The simulation is run
10 times, having different random seeds set in the ski-files and run over 20 threads.
Figure 3.5: The noise resulting from the direct fluxes Fdir of the panchromatic simulation using
polychromatic photon packages. The error bars indicate 1σ deviations. The simulation is run
10 times, having different random seeds set in the ski-files and run over 20 threads.
Figure 3.6: V-band images of the edge-on spiral galaxy produced by the panchromatic simulation with the original code using 10^8 monochromatic photons (top) and the adapted code using 10^6 polychromatic photons (bottom). These images are the results of the direct fluxes Fdir only.
This residual frame is calculated as:
$$\mathrm{Frame}_{\rm res} = \frac{\left|\mathrm{Frame}_{\rm ref} - \mathrm{Frame}_{\rm sim}\right|}{\left|\mathrm{Frame}_{\rm ref}\right|} \qquad (3.2)$$
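Applied pixel by pixel, eq. (3.2) can be sketched as follows (illustrative Python with made-up 2×2 frames; SKIRT itself works on full FITS images):

```python
# Pixel-wise residual frame of eq. (3.2), sketched for made-up 2x2 frames
# (illustrative only, not SKIRT code).
def residual_frame(frame_ref, frame_sim):
    """|ref - sim| / |ref| per pixel; the reference frame comes from the
    high-precision monochromatic run."""
    return [[abs(r - s) / abs(r) for r, s in zip(row_r, row_s)]
            for row_r, row_s in zip(frame_ref, frame_sim)]

frame_ref = [[4.0, 2.0],
             [1.0, 8.0]]
frame_sim = [[3.5, 2.0],
             [2.0, 6.0]]

print(residual_frame(frame_ref, frame_sim))
# [[0.125, 0.0], [1.0, 0.25]] -- note the pixel exceeding 100%
```

The low-flux pixel (reference value 1.0) produces a residual of 100%, illustrating why the faint outer edges of the galaxy show the highest errors.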
The frame obtained with the original simulation using 10^8 monochromatic photon packages is set as the reference frame Frame_ref. Hence, the pixels in the residual frame are obtained from the difference between the pixel values of the two frames, divided by the pixel values of the reference frame. The color bars represent errors in levels of 12.5%. All white pixels have an error of more than 87.5%, which may exceed 100%. Notice that the outer edges of the edge-on spiral galaxy have the highest errors, due to the detection of very small fluxes in these pixels. A pixel can, for example, detect only one photon in one frame, arriving at that particular location through the various random processes, while detecting none in the other frame. These errors at the edges can probably be reduced by using more photon packages in both simulations. The disk has errors ranging up to 37.5%. Another visible aspect is that at longer wavelengths (bottom of the figure) the disk contains more pixels with errors below 12.5%, while at shorter wavelengths (top of the figure) the pattern of a dust lane within the disk produces higher errors. This is the result of the high efficiency of extinction at these wavelengths.
Figure 3.7: The residual frames of the V-band images resulting from the simulation using 10^6 polychromatic photon packages and the reference V-band image. From the top image to the
bottom the wavelength is increased from a short to longer wavelength bin. The middle frame
represents the residual frame in the center of the optical range (around 0.55 µm). The color
bar shows percentage differences in levels of 12.5%, ranging from 0% (black) to 100% or more
(white).
3.2.3 Scattering of the photon packages
Extinction of the photon packages by the dust does not only result from the physical process of absorption, but from scattering as well. Here we want to check whether this scattering process is performed correctly and discover how strongly the use of biasing factors affects the resulting output. The photons that are scattered by the dust, located predominantly in the disk of the galaxy, will most likely be detected by the face-on instruments, i.e. the detectors that look at the galaxy from a zero inclination angle. This is due to the absorption in the disk, which prevents photons from being detected in the edge-on view. This is why the results in this section are always examined for the face-on spiral galaxy. The scattered fluxes Fsca detected by the face-on instruments of both simulations are displayed in figure 3.8 and coincide very well. Notice that we have restricted the range of the scattered flux values of the SED to 4 orders of magnitude. Looking at the relative difference plot, the error is very low at short wavelengths: for wavelengths below 7 µm the relative difference lies between −2% and +2%. The red part of the SED spectrum shows higher differences, not visible in the figure, reaching approximately −16%. At these wavelengths, however, the scattered fluxes have magnitudes as low as 10^−13, for which very small errors can result in very high percentage differences. It is therefore only useful to look at the highest fluxes of the SEDs, ranging over 4 orders of magnitude.
We can again examine the noise resulting from the adapted simulation for the scattered fluxes, detected by the face-on instrument, as this is important to estimate the accuracy and correctness of the new SKIRT code. The percentage difference of every simulation, each with a different random seed set in the ski-file, is calculated with respect to a simulation using 10^8 monochromatic photon packages, as this simulation produces a noise close to zero due to its high precision. The results are displayed in figure 3.9. The noise covers a range of 2% (±1%) within 1σ for wavelengths shorter than 1 µm, while it increases for longer wavelengths. We know that using a larger number of monochromatic photon packages in the original SKIRT code can reduce the noise and increase the accuracy in the simulation as much as possible. This holds for the adapted SKIRT code as well, but it will not suffice, as the biasing factors wλ that induce these large errors depend on parameters of the dust system, such as the total dust mass. A possibility to reduce this noise is the use of partly polychromatic photon packages, i.e. to split the photon package into multiple photons covering smaller wavelength ranges. This concept of photon splitting should reduce the biasing factors used in the simulatepropagation function. It is explained in more detail in section 3.4.
Figure 3.8: Top: Comparison of the SED plots for the scattered flux Fsca of the monochromatic (red line) and polychromatic photon packages (green line). The green curve is covered by the red curve because they coincide almost exactly. Bottom: The difference plot of the scattered fluxes of both simulations. The differences are expressed in percent.
Figure 3.9: The noise for the scattered flux produced by the panchromatic simulation using 10^5 polychromatic photon packages. For the calculations, the simulation is run 10 times, having different random seeds set in the ski-files and run over 20 threads.
Figure 3.10: V-band images of the edge-on spiral galaxy produced by the scattered photons,
simulated with the original code (top) and the adapted code (bottom).
Using ds9, we can obtain the two-dimensional V-band images of the scattered flux Fsca
produced by the galaxy. As in the previous section, the model that contains only an ex-
ponential disk is improved to a more realistic case by adding a Sersic bulge to the stel-
lar geometry. The obtained results are displayed in figure 3.10 for both panchromatic
simulations, viewed from an inclination of 88◦. Comparing the V-band image resulting
from the original monochromatic SKIRT code with the V-band image resulting from the
adapted polychromatic SKIRT code will, just as before, be done by creating a residual
frame using eq. 3.2. This frame is shown in figure 3.11, where the color bar ranges
from 0% to 100%. The white pixels can in this case contain percentage differences that
exceed 100%. It is clear that the edge of the spiral galaxy has errors higher than
87.5%, caused by the very low or even zero detected scattered fluxes in these pixels.
The center of the disk contains pixels with errors below 12.5% for longer wavelengths
(bottom of the figure), while these pixels have higher errors at shorter wavelengths
(top of the figure), due to the high extinction rate in this wavelength band.
Figure 3.11: The residual of the V-band image resulting from the simulation using 10^6 polychromatic photon packages and the reference V-band image. From the top image to the bottom the
wavelength is increased from a short to longer wavelength bin. The middle frame represents
the residual frame in the center of the optical range (around 0.55 µm). The color bar represents
percentage differences in levels of 12.5%, ranging from 0% (black) to 100% or more (white).
3.3 Optically thin and thick models
In the previous section we found that the scattered fluxes Fsca of the spiral model for the monochromatic and polychromatic algorithms were in relatively good agreement. The calculated differences in the SED between both methods were low enough to conclude that the polychromatic algorithm works properly. However, we only considered a particular model containing a dust mass of 4 × 10^7 M⊙. As the optical depth τλ is given by

$$\tau_\lambda = \int \kappa_\lambda\, \rho\, {\rm d}s = \kappa_\lambda\, \Sigma \,,$$

with Σ the mass integral along the photon's path and κλ the extinction coefficient, τλ can reach large values when the dust mass is increased, making the model optically thick (τλ ≫ 1).
wλ. Recall eq. 1.12:
$$w_\lambda = e^{\,\tau_{\rm ref}-\tau_\lambda}\,\frac{\tau_\lambda}{\tau_{\rm ref}}\,\frac{1-e^{-\tau_{\rm ref,path}}}{1-e^{-\tau_{\lambda,{\rm path}}}} = e^{\,\Sigma\,(\kappa_{\rm ref}-\kappa_\lambda)}\,\frac{\kappa_\lambda}{\kappa_{\rm ref}}\,\frac{1-e^{-\tau_{\rm ref,path}}}{1-e^{-\tau_{\lambda,{\rm path}}}}$$
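The behavior of this factor can be made concrete with a small numerical sketch (not SKIRT code; the κ values are made up, with κref > κλ mimicking an optical reference wavelength versus a redder wavelength):

```python
import math

# Numerical sketch of eq. (1.12) with tau = kappa * Sigma.  The kappa values
# are illustrative, not taken from a real dust model; kappa_ref > kappa_lam
# mimics a reference wavelength in the optical (high extinction) versus a
# redder wavelength (low extinction).
def weight_factor(kappa_ref, kappa_lam, sigma, sigma_path):
    """Biasing factor w_lambda for a mass column sigma up to the interaction
    point and sigma_path along the full path through the dust."""
    tau_ref, tau_lam = kappa_ref * sigma, kappa_lam * sigma
    tau_ref_p, tau_lam_p = kappa_ref * sigma_path, kappa_lam * sigma_path
    try:
        return (math.exp(tau_ref - tau_lam) * (tau_lam / tau_ref)
                * (1.0 - math.exp(-tau_ref_p)) / (1.0 - math.exp(-tau_lam_p)))
    except OverflowError:
        return math.inf  # the factor no longer fits in a double

# Increasing the dust column makes w_lambda grow exponentially and finally
# overflow, as observed for the optically thick models:
for sigma in (0.1, 1.0, 10.0, 1000.0):
    print(sigma, weight_factor(2.0, 0.2, sigma, 2.0 * sigma))
```

At the reference wavelength itself (κλ = κref) the factor reduces to exactly one, which is why the noise curves cross unity there.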
The weight factor wλ thus grows exponentially with the mass integral Σ. Increasing the dust mass in the spiral galaxy, i.e. changing the optically thin system into an optically thick system, can therefore cause the biasing factors to rise to extremely high values. In the model used in the preceding sections, the magnitude of wλ even exceeds a value of 2800 at a wavelength of 6.30 µm; this is of course unacceptable when we want to obtain accurate results. These high weight factors have a tremendous effect on the detected scattered flux of the system, especially in the face-on configuration. An example of this effect is shown in figure 3.12 for near-IR wavelengths, where the frames on the left, obtained with the reference monochromatic simulations, are compared to the frames on the right, resulting from the polychromatic algorithm. From top to bottom the systems have optical depths τλ equal to 0.1, 1, 10 and 100, specified at a wavelength of 6.30 µm. One can clearly observe that the images are affected by the large biasing factors when the optical depth τλ of the system is increased, as the frames become very noisy. Some pixels detect flux densities that have been boosted to very high values, shown by the yellow and red pixels scattered over the exponential disk. Notice the low fluxes Fsca for τλ = 0.1, as the optically thin system causes little scattering. At τλ = 100, Fsca is smaller as well. This is due to the high amount of photon absorption, by
Figure 3.12: Images in the Near IR of the face-on spiral galaxy. The left frames are obtained by
the monochromatic algorithm, the right by the polychromatic algorithm. From top to bottom
the optical depths τλ are increased from 0.1 to 100. Notice the high noise for the optically thick
systems, originating from the extremely high biasing factors by which the luminosities at these
wavelengths are altered.
which few scattered photons can reach the detectors.
The table below illustrates how enormous the weight factors can become for the optical depths used in the above simulations. Notice that in the τλ = 100 case, the highest encountered weight factor is too big to be stored as a double-value in the code.

τλ      wλ,max
0.1     16.2338
1       6.22089 × 10^16
10      1.44838 × 10^173
100     inf
The effect that these magnitudes of the optical depth τλ have on the accuracy of the model can be seen in figure 3.13, where the relative differences between the scattered flux densities produced by the monochromatic and polychromatic photons are calculated for four different τλ-values and two different inclination angles. From these graphs we can see that for small optical depths the results agree very well. Notice that for τλ = 0.1 a 'tail' with higher discrepancies appears at wavelengths longer than 6 µm. This is due to the very small magnitudes of the scattered flux Fsca at these wavelengths, as scattering is very inefficient in the far-IR range. For larger optical depths, i.e. τλ > 10, higher discrepancies become visible. In the τλ = 100 case, these relative differences can increase to more than 50%. In other words, when the optical depth increases, the calculations made with polychromatic photon packages become increasingly affected by the large range of optical depths encountered at different wavelengths. The reference wavelength used lies in the center of the optical range, around 0.55 µm, where the extinction coefficient, and thus the opacity, is very high. The accuracy suffers the most at longer wavelengths, starting at about 2 µm, because the extinction coefficient, and hence the opacity, drops to a much lower value there, making the extinction processes far less efficient at these wavelengths.
Figure 3.13: Difference in SED of the scattered flux between the results obtained with 10^6 monochromatic photon packages and 10^6 polychromatic photon packages. The simulated models range from optically thin (top left) to optically thick (bottom right). In the most optically thick cases the discrepancies can increase to 50%.
As has already been pointed out in the previous section, a possible solution to avoid this drawback is the use of partly polychromatic photon packages. The photon package will be split one or more times, allowing the red photon package, i.e. the package containing the range of longer wavelengths, to be assigned a longer reference wavelength as its characteristic. The extinction coefficients at these longer wavelengths then have values closer to κref,red, causing the biasing factors wλ to remain within particular bounds. This technique, explained in the following section, makes it possible to limit the magnitude of the biasing factors.
3.4 Splitting of the photon packages
3.4.1 The concept of photon splitting
The polychromatic algorithm has been tested for some basic stellar and dusty models, showing that it does indeed reproduce approximately the same results as obtained with the monochromatic simulation. In very optically thick models (with optical depths τλ ≫ 1), extremely large weighting factors can occur, as eq. 1.12 can increase without bound. This suggests that a proper choice of the reference wavelength is crucial for the accuracy of the method.
The disadvantages resulting from the large weighting factors can probably be allevi-
ated by splitting the photon packages in multiple parts. In this way, partly polychromatic
photon packages are created, which contain a range of wavelengths with a certain refer-
ence wavelength, but not the whole electromagnetic spectrum. The mechanism should
be implemented in such a way that a particular photon package continues splitting,
until it contains a range of weighting factors close enough to unity. This mechanism
is represented in a simplistic manner in figure 3.14, where an original photon is split into a blue and a red photon. The splitting happens at the index of the reference wavelength. The red photons, i.e. the photons containing the wavelength range above the reference wavelength, are consecutively stored in a deque, while the blue photons continue their calculations.
From equation 1.12, it can be seen that the weight factors grow exponentially with the
magnitude of the random interaction optical depth τλ. In other words, the larger the
random propagation path s of the photon package is, the higher the biasing factors at
Figure 3.14: Demonstration of the splitting of a photon package into a blue and a red part. The original photon, covering the wavelength grid λ1 … λref … λN, is split at the reference wavelength into a blue photon (λ1 … λref, with a new reference λref,blue) and a red photon (λref+1 … λN, with reference λref,red); the blue photon can in turn be split into a blue-blue and a blue-red part. The simulation will continue with the blue photon package and split this further if necessary.
certain wavelengths will be. Because of this, the photon splitting should be implemented within the fillDustSystemPath function, such that the photon is split before the random interaction point is drawn. In the opposite case, when the random scattering location is determined before the photon is split, every photon that would propagate to and interact close to the edge of the medium, where large interaction optical depths are found, would be forced to interact deeper in the medium, where smaller interaction optical depths are found. This is of course undesirable if we want to obtain accurate results.
In the function fillDustSystemPath, a flag ynsplit is defined, initially set to false, indicating whether the photon package has been split or not. Next, a routine begins that calculates the biasing factors wλ,path at the end of the photon's path. This is done by using τλ = τλ,path and τref = τref,path in equation 1.12, hence calculating q(τλ,path)/q(τref,path). Whenever one of the weight factors wλ,path reaches a value that deviates too much from unity, the photon is split. This deviation limit has to be determined experimentally, which is done in section 3.4.2. A red package is created, having all the characteristics of the original photon package except for the minimum and maximum wavelength indices, and stored at the end of the deque. In the original package, now referred to as the blue photon package, a new reference wavelength (set as the center of the photon's wavelength range) and a new maximum wavelength index (above which the luminosities are set to zero) are assigned. The ynsplit boolean is set to true such that the calculation can start over.
This process will continue, every time performed with the blue part of the package
that has been split, until the range of biasing factors for that particular blue photon is
within the selected limits. The routine described in section 2.3 now continues by call-
ing the different functions that describe the blue photon package’s life cycle.
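The splitting loop can be sketched as follows (an illustrative Python simplification, not the actual C++ implementation: the dictionary-based package, the B_DEV constant and the weight list are made-up stand-ins, and the split index is simply taken as the centre of the remaining wavelength range):

```python
from collections import deque

# Schematic sketch of the photon-splitting mechanism (not actual SKIRT code).
B_DEV = 0.5  # deviation limit |w - 1| above which a package is split

def split_until_bounded(package, weights_at_path_end, work_deque):
    """Keep splitting the blue part until all its end-of-path weight factors
    stay within B_DEV of unity; red parts go onto the deque for later."""
    lo, hi = package["min_index"], package["max_index"]
    while hi > lo and max(abs(w - 1.0)
                          for w in weights_at_path_end[lo:hi + 1]) > B_DEV:
        ref = (lo + hi) // 2  # split index: centre of the remaining range
        # red part: wavelengths above the split index, stored for later
        work_deque.append(dict(package, min_index=ref + 1, max_index=hi))
        hi = ref              # continue with the blue part
        package["max_index"] = hi
    return package

# Made-up end-of-path weights, far from unity at the red end of the range:
weights = [1.0, 1.1, 0.9, 1.3, 2.0, 5.0, 40.0, 2800.0]
todo = deque()
blue = split_until_bounded({"min_index": 0, "max_index": 7}, weights, todo)
print(blue, len(todo))  # blue part keeps bins 0..3; one red part queued
```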
Every partly polychromatic photon that is stored in the deque will consecutively undergo the life cycle described in section 2.3, before a new polychromatic photon package can be launched. Notice that a small adjustment must be made to the termination of the photon's life cycle. Originally, a photon dies when its total luminosity decreases below a minimum value, determined by the total luminosity of the system Ltot and the number of photon packages Npp in the simulation through the relationship:
$$L_{\rm min} = 10^{-4} \times \frac{L_{\rm tot}}{N_{\rm pp}} \qquad (3.3)$$
When a photon is split into multiple parts, the different photon parts cover smaller wavelength ranges and thus contain smaller total luminosities. This can induce a problem, as the split photons would probably terminate their life cycles too soon, based on the minimum luminosity in eq. 3.3. To solve this problem, Lmin is redefined using the largest luminosity Lλ,max contained in a photon package taken from the deque:
$$L_{\rm min} = 10^{-4} \times L_{\lambda,{\rm max}} \qquad (3.4)$$
This maximal luminosity is extracted from the photon by implementing a small function maxluminosity in the PhotonPackage class. Whenever the maximal luminosity a photon contains drops below the critical value of eq. 3.4, its life cycle is terminated.
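The difference between the two termination criteria can be illustrated with a small sketch (made-up numbers; only the quantities of eqs. 3.3 and 3.4 are taken from the text, the helper names are hypothetical):

```python
# Illustrative comparison of the two termination criteria (made-up numbers,
# not SKIRT code).
def l_min_original(l_tot, n_pp):
    """Eq. (3.3): threshold from the total system luminosity."""
    return 1e-4 * l_tot / n_pp

def l_min_split(l_lambda_max):
    """Eq. (3.4): threshold from the largest per-bin luminosity of the
    package taken from the deque (cf. the maxluminosity function)."""
    return 1e-4 * l_lambda_max

l_tot, n_pp = 1.0e10, 10**6
bins = [2.0e-3, 5.0e-3, 1.0e-3]  # per-bin luminosities of a split package

# Under eq. (3.3) the split package would die immediately:
print(sum(bins) < l_min_original(l_tot, n_pp))   # True: premature termination

# Under eq. (3.4) it survives until real attenuation sets in:
threshold = l_min_split(max(bins))               # fixed when dequeued
print(max(bins) < threshold)                     # False: package lives on
attenuated = [l * 1e-5 for l in bins]            # after heavy extinction
print(max(attenuated) < threshold)               # True: now it terminates
```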
3.4.2 Determination of the optimal biasing limit
The value of wλ,path at which a photon package should split must be determined
experimentally. In this project we try to increase not only the accuracy of the simu-
lations, but the speed as well. Our purpose now is to find an optimal
balance between these two benefits of (partly) polychromatic photon packages. As the
biasing factor wref,path at the reference wavelength equals one, we speak of a devia-
tion from unity, given by |wλ,path − 1|. If the maximal allowed deviation, which
we will refer to as bdev, is extremely small, the simulation that uses partly poly-
chromatic photons reduces to the simulation using monochromatic photons. In this
case, the biasing factors will all equal unity and hence the accuracy is optimized. Of
course, when a lot of photon package splitting occurs, the simulation time will in-
crease unnecessarily. On the other hand, when bdev is extremely large, the simulation
reduces to the polychromatic case where the splitting technique is not used. In this
case the accuracy will be lower, but the simulation will speed up. Note that for the
spiral model used in the previous sections, wλ,path can reach values up to 2000,
although such values are not very common.
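The splitting criterion can be written as a small predicate. Only the deviation test |wλ,path − 1| > bdev is taken from the text; the function and parameter names below are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Decide whether a partly polychromatic photon package should be split.
// wpath holds the biasing factors wλ,path for each wavelength bin of the
// package; bdev is the experimentally tuned deviation limit. The package
// splits as soon as any factor deviates from unity by more than bdev.
bool shouldSplit(const std::vector<double>& wpath, double bdev)
{
    for (double w : wpath)
        if (std::abs(w - 1.0) > bdev) return true;
    return false;
}
```

With a very small bdev this predicate fires for almost every package (monochromatic limit); with a very large bdev it never fires (purely polychromatic limit), matching the two extremes described above.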
Amount of photon splitting
To get a picture of the amount of photon splitting occurring in a simulation that uses
a particular value of bdev, we can plot the splitting percentage, given by
\[
\frac{N_\mathrm{split}}{N_\mathrm{pp}} \times 100\% \,,
\]
as a function of these bdev values. For this, a counter is added in the code to keep
track of the number of splitting events Nsplit. Recall from the previous section that
one photon can split multiple times, so the splitting percentage can exceed
100%. In the spiral model used in this project, the wavelength range is divided into
61 bins, so the maximal possible value of the splitting percentage is 6000%,
as each photon package can split 60 times. Some arbitrary values of bdev are chosen,
given in Table 3.1, for which every simulation is run 10 times with a different random
seed set in the ski file. Note that the simulations are done using only one thread;
otherwise, due to the parallelization in SKIRT, a photon stored in the deque
could be extracted and processed multiple times. The number of split photons for each
simulation corresponding to every bdev value is shown in the table, together with
the calculated average and standard deviation. This result is shown graphically in
figure 3.15, which displays the average splitting values as a function of bdev. The
error bars, which indicate the 1σ uncertainty given in the table, are too small to be visible.
As expected, we obtain a declining curve, i.e. more photons are split when a strict (low)
limit is used compared to a more lax one. When choosing a low deviation limit bdev,
the simulation time increases, which is of course undesirable. This means that a
high value of bdev, causing little or no photon splitting, is preferred for optimal
speed. However, this technique was introduced in order to obtain more accurate
results, and as we will see in the next section, that requires a lower value of bdev.
We can already conclude that the most relevant value for the limit of the
biasing factor will be found between bdev = 0.1 and 10. For bdev higher than
10 the photon splitting becomes negligible, while for bdev lower than 0.1 the amount
of photon splitting becomes incredibly high. Note that for an optically thick system,
a higher splitting percentage is expected at a fixed biasing limit than for an
optically thin system.
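The splitting percentage and its 1σ spread over repeated runs can be computed as sketched below; the function names are illustrative assumptions, and the sample values in the usage are placeholders, not the entries of Table 3.1.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Splitting percentage Nsplit/Npp * 100%; it may exceed 100% because a
// single package can split multiple times (up to 60 times for 61 bins).
double splittingPercentage(long Nsplit, long Npp)
{
    return 100.0 * static_cast<double>(Nsplit) / static_cast<double>(Npp);
}

// Mean and 1-sigma (population) standard deviation over repeated
// simulations with different random seeds, as plotted in figure 3.15.
void meanAndSigma(const std::vector<double>& values, double& mean, double& sigma)
{
    mean = 0.0;
    for (double v : values) mean += v;
    mean /= values.size();
    double var = 0.0;
    for (double v : values) var += (v - mean) * (v - mean);
    var /= values.size();
    sigma = std::sqrt(var);
}
```

For each bdev value, the 10 per-seed splitting percentages are fed to meanAndSigma to obtain the averages and error bars shown in figure 3.15.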
Figure 3.15: The percentage of split photon packages with respect to the value of bdev. The error
bars, which are too small to be visible, indicate the 1σ deviation from the mean. The simulations
are run with 10⁶ partly polychromatic photon packages.
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica
TineGeldof_Thesis fysica

More Related Content

Viewers also liked

Viewers also liked (6)

Presentación
PresentaciónPresentación
Presentación
 
Folleto resumen CPIP
Folleto resumen CPIPFolleto resumen CPIP
Folleto resumen CPIP
 
Portfolio
PortfolioPortfolio
Portfolio
 
Smart City vs. Smart Stadium
Smart City vs. Smart StadiumSmart City vs. Smart Stadium
Smart City vs. Smart Stadium
 
Data et Collectivités : accès aux données et nouveaux services (Introduction)
Data et Collectivités : accès aux données et nouveaux services (Introduction)Data et Collectivités : accès aux données et nouveaux services (Introduction)
Data et Collectivités : accès aux données et nouveaux services (Introduction)
 
Engage in effective collaboration with Azure AD B2B
Engage in effective collaboration with Azure AD B2BEngage in effective collaboration with Azure AD B2B
Engage in effective collaboration with Azure AD B2B
 

Similar to TineGeldof_Thesis fysica

Absorption Enhancement by Light Scattering for Solar.pdf
Absorption Enhancement by Light Scattering for Solar.pdfAbsorption Enhancement by Light Scattering for Solar.pdf
Absorption Enhancement by Light Scattering for Solar.pdfMarifeAlcantaraCaira
 
martinthesis
martinthesismartinthesis
martinthesisMartin L
 
Global Illumination Techniquesfor the Computation of High Quality Images in G...
Global Illumination Techniquesfor the Computation of High Quality Images in G...Global Illumination Techniquesfor the Computation of High Quality Images in G...
Global Illumination Techniquesfor the Computation of High Quality Images in G...Frederic Perez
 
MSc_thesis_OlegZero
MSc_thesis_OlegZeroMSc_thesis_OlegZero
MSc_thesis_OlegZeroOleg Żero
 
Predicting neutrino mass constraints from galaxy cluster surveys
Predicting neutrino mass constraints from galaxy cluster surveysPredicting neutrino mass constraints from galaxy cluster surveys
Predicting neutrino mass constraints from galaxy cluster surveysjlynnmuir
 
Photometry of the UWISH2 extended H2 source catalogue
Photometry of the UWISH2 extended H2 source cataloguePhotometry of the UWISH2 extended H2 source catalogue
Photometry of the UWISH2 extended H2 source catalogueJack Nicholas
 
Probing new physics on the horizon of black holes with gravitational waves
Probing new physics on the horizon of black holes with gravitational wavesProbing new physics on the horizon of black holes with gravitational waves
Probing new physics on the horizon of black holes with gravitational wavesSérgio Sacani
 
Zuo et al., 2016, JGE_Fractal modelling.pdf
Zuo et al., 2016, JGE_Fractal modelling.pdfZuo et al., 2016, JGE_Fractal modelling.pdf
Zuo et al., 2016, JGE_Fractal modelling.pdfVictorValdivia20
 
edepotlink_t55083298_001.compressed
edepotlink_t55083298_001.compressededepotlink_t55083298_001.compressed
edepotlink_t55083298_001.compressedEduardo Barbaro
 
S.Denega_thesis_2011
S.Denega_thesis_2011S.Denega_thesis_2011
S.Denega_thesis_2011Sergii Denega
 
Thesis Fabian Brull
Thesis Fabian BrullThesis Fabian Brull
Thesis Fabian BrullFabian Brull
 

Similar to TineGeldof_Thesis fysica (20)

Absorption Enhancement by Light Scattering for Solar.pdf
Absorption Enhancement by Light Scattering for Solar.pdfAbsorption Enhancement by Light Scattering for Solar.pdf
Absorption Enhancement by Light Scattering for Solar.pdf
 
martinthesis
martinthesismartinthesis
martinthesis
 
KJM3020-Lars Kristian Henriksen
KJM3020-Lars Kristian HenriksenKJM3020-Lars Kristian Henriksen
KJM3020-Lars Kristian Henriksen
 
MikkelJuhlHobertMastersThesis
MikkelJuhlHobertMastersThesisMikkelJuhlHobertMastersThesis
MikkelJuhlHobertMastersThesis
 
Global Illumination Techniquesfor the Computation of High Quality Images in G...
Global Illumination Techniquesfor the Computation of High Quality Images in G...Global Illumination Techniquesfor the Computation of High Quality Images in G...
Global Illumination Techniquesfor the Computation of High Quality Images in G...
 
MyThesis
MyThesisMyThesis
MyThesis
 
MSc_thesis_OlegZero
MSc_thesis_OlegZeroMSc_thesis_OlegZero
MSc_thesis_OlegZero
 
dissertation
dissertationdissertation
dissertation
 
Graphene Quantum Dots
Graphene Quantum DotsGraphene Quantum Dots
Graphene Quantum Dots
 
Thesis_de_Meulenaer
Thesis_de_MeulenaerThesis_de_Meulenaer
Thesis_de_Meulenaer
 
Predicting neutrino mass constraints from galaxy cluster surveys
Predicting neutrino mass constraints from galaxy cluster surveysPredicting neutrino mass constraints from galaxy cluster surveys
Predicting neutrino mass constraints from galaxy cluster surveys
 
thesis
thesisthesis
thesis
 
Photometry of the UWISH2 extended H2 source catalogue
Photometry of the UWISH2 extended H2 source cataloguePhotometry of the UWISH2 extended H2 source catalogue
Photometry of the UWISH2 extended H2 source catalogue
 
Probing new physics on the horizon of black holes with gravitational waves
Probing new physics on the horizon of black holes with gravitational wavesProbing new physics on the horizon of black holes with gravitational waves
Probing new physics on the horizon of black holes with gravitational waves
 
ddmd
ddmdddmd
ddmd
 
Zuo et al., 2016, JGE_Fractal modelling.pdf
Zuo et al., 2016, JGE_Fractal modelling.pdfZuo et al., 2016, JGE_Fractal modelling.pdf
Zuo et al., 2016, JGE_Fractal modelling.pdf
 
edepotlink_t55083298_001.compressed
edepotlink_t55083298_001.compressededepotlink_t55083298_001.compressed
edepotlink_t55083298_001.compressed
 
S.Denega_thesis_2011
S.Denega_thesis_2011S.Denega_thesis_2011
S.Denega_thesis_2011
 
ThesisJoshua
ThesisJoshuaThesisJoshua
ThesisJoshua
 
Thesis Fabian Brull
Thesis Fabian BrullThesis Fabian Brull
Thesis Fabian Brull
 

TineGeldof_Thesis fysica

  • 1. Faculty of Sciences Department of Physics and Astronomy Polychromatic Monte Carlo dust radiative transfer Tine Geldof Promoter: Prof. Dr. Maarten Baes Copromoter: Peter Camps A Thesis submitted for the degree of Master in Physics and Astronomy Year 2013 - 2014
  • 2.
  • 3. Contents 1 Introduction 7 1.1 Interstellar dust in galaxies . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.2 The SKIRT program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 1.2.1 Dust radiative transfer in SKIRT . . . . . . . . . . . . . . . . . . . 10 1.2.2 Optimization techniques . . . . . . . . . . . . . . . . . . . . . . . . 12 1.3 Polychromatism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 1.3.1 Polychromatism in SUNRISE . . . . . . . . . . . . . . . . . . . . . 16 1.3.2 Polychromatism in SKIRT . . . . . . . . . . . . . . . . . . . . . . . 18 1.3.3 Problem and solution . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.4 Overview of the symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2 Implementation of the polychromatic photon packages 23 2.1 The different stages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 2.2 Stellar emission of the photon packages . . . . . . . . . . . . . . . . . . . 26 2.3 Propagation through the dust . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.4 Thermal dust emission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 2.5 The dust self-absorption . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 2.6 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 3 Findings 37 3.1 Stellar model without dust: bi-Plummer model . . . . . . . . . . . . . . . 37 3.2 Stellar model with dust component . . . . . . . . . . . . . . . . . . . . . . 40 3.2.1 Launching of the photon packages . . . . . . . . . . . . . . . . . . 41 3.2.2 Absorption of the photon packages . . . . . . . . . . . . . . . . . 42 3.2.3 Scattering of the photon packages . . . . . . . . . . . . . . . . . . 47 3.3 Optically thin and thick models . . . . . . . . . . . . . . . . . . . . . . . . 51 3.4 Splitting of the photon packages . . . . . . . . . . . . . . . . . . . . . . . 55 3.4.1 The concept of photon splitting . . . . . 
. . . . . . . . . . . . . . . 55 3.4.2 Determination of the optimal biasing limit . . . . . . . . . . . . . 57 3.4.3 Accuracy of the results . . . . . . . . . . . . . . . . . . . . . . . . . 63
  • 4. 3.5 Effects of dust self-absorption and emission . . . . . . . . . . . . . . . . . 65 3.5.1 Thermal dust emission . . . . . . . . . . . . . . . . . . . . . . . . . 65 3.5.2 Reaching internal equilibrium . . . . . . . . . . . . . . . . . . . . 69 3.6 RGB images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3.7 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 4 Discussion and future prospects 75 5 Discussie 77 2
  • 5. FACULTEIT WETENSCHAPPEN Vakgroep Vaste-Stofwetenschappen Faculteit Wetenschappen – Vakgroep Vaste-Stofwetenschappen Krijgslaan 281 S1, B-9000 Gent www.UGent.be Geachte heer/mevrouw, Ik wil hierbij graag de toelating geven aan mevrouw Tine Geldof, studente Master Fysica & Sterrenkunde aan de Universiteit Gent, om haar masterthesis in het Engels te schrijven, gezien het internationale karakter van het thesiswerk. Ik hoop u hiermee van dienst te zijn. Met de meeste hoogachting, Prof. Dr. Dirk Poelman Voorzitter examencommissie Fysica & Sterrenkunde uw kenmerk xxxxx contactpersoon Dirk Poelman ons kenmerk xxxxx e-mail dirk.poelman@UGent.be datum 22-04-2014 tel. en fax T +32 9 264 43 67 F +32 9 264 49 96
  • 6. 4
  • 7. Acknowledgement This master thesis is the final result of my education at the Ghent University in order to achieve the degree of Master in Physics and Astronomy. Working on a thesis project requires a lot of time, energy and perseverance, but is especially intellectually stimulat- ing and challenging. This is why I would like to thank my promoter Prof. Dr. Maarten Baes, as he gave me the opportunity and courage to work on this project. I would like to thank him for his guidance throughout the past year and in particular his ongoing enthusiastic supervision. I would like to thank PhD student Peter Camps as well, who helped me at all stages of my master thesis. In particular his insights about the working of the SKIRT program and the Qt Creator development. It was truly a pleasure to work under his guidance. I would also like to thank my fellow student Sam Verstocken, for his insights and help with whatever small problem I had, and my family and other colleagues for coping with my moments of stress during the past year. Tine Geldof
  • 8. Summary The study of special dusty astrophysical objects in the universe - such as spiral and el- liptical galaxies, galaxy clusters and even galaxy clouds - is an important development in the astrophysical research of today. SKIRT, acronym for Stellar Kinematics Including Radiative Transfer, is an advanced 3D continuum radiative transfer code based on the Monte Carlo algorithm. A simulation consists of consecutively following the individ- ual path of each single photon package trough the dusty medium. SKIRT is one of the programs that simulates and studies these astrophysical objects in detail and is devel- oped by the UGent astronomy department. It currently uses monochromatic photon packages and describes their life cycle during their propagation. The purpose of this project is to introduce an optimization technique that uses poly- chromatic photon packages in SKIRT. The main goal is to implement this method and investigate the advantages and disadvantages this implementation will cause. One of them being a reduced noise in the color images of the systems. With this technique, we hope to retrieve accurate results and improve the SKIRT code. On the other hand, inevitably problems will be encountered and discussed during this report, due to the wavelength dependence of different astrophysical parameters used in the calculations. The stepwise implementation will be tested each time with some simple astrophysi- cal models by comparing the codes containing the monochromatic and polychromatic photon packages to one and other. At first the stellar model will be tested without a dusty medium. Further on dust will be inserted to verify the scattering and extinction of the photon packages. A third step is to include dust emission and dust self absorp- tion - the absorption of photon packages that are emitted by the dust itself - in the simulation. The improved program will then be tested on several models with the aim of investigating the accuracy of the results. 
The source code of the adjustments in SKIRT, written in C++, will not be included in this report. More information about the program, the SKIRT documentation and the down- loading and installation guide, can be found at the SKIRT website: www.skirt.ugent.be.
  • 9. Introduction 1 1.1 Interstellar dust in galaxies In order to study the structure as well as the kinematics of galaxies, we have to obtain their intrinsic three-dimensional distributions by deprojecting observed 2-dimensional images. This deprojection is no easy task and is complicated due to the effects of inter- stellar dust. This constituent of the interstellar medium consists of a variety of macro- scopic solid particles, mainly carbons and silicates. Although dust forms merely a small fraction (about 1% or less) of the total amount of matter within galaxies, it is nevertheless a very import constituent as it affects the starlight on its way trough the galaxy. Hence, interstellar dust affects the projections of the light distributions, giv- ing us a distorted view of the investigated galaxies. It will make many regions of a galaxy opaque, as the stellar photons will interact with the different types of dust grains through the physical processes of absorption and scattering. The efficiency of these two processes, from which the combination is called an extinction process, strongly depends on the wavelength of the light which is transmitted and the prop- erties of the dust grains, e.g. their macroscopic sizes, compositions. A portion of the stellar radiation, which covers the UV and optical wavelengths, will be converted to IR and sub-millimeter radiation. In some special galaxies, it seems that even about 99% of the stellar light is converted to these redder wavelengths [Baes, 2012]. Just as the light profiles of the galaxies are severely affected by the dust, the projected kinematics are too. The stellar kinematics of a galaxy refers to the fact that the stars within the system move in particular orbits. For each star, it is possible to determine their three-dimensional position and velocity. 
It seems that photons originating from, for example, high velocity stars in the center of opaque regions can’t reach the observer, hence they will not contribute to the line-of-sight velocity distribution (LOSVD) of the galaxy’s spectrum. The kinematics of the galaxy will in this case be biased towards lower line-of-sight velocities [Baes, 2001]. The dust grains within the interstellar medium are found in a broad range of differ- ent types, with varying sizes, compositions, etc. In performing simulations of real-
  • 10. istic models, it is necessary to account for these different dust mixtures. This can be done by defining a number of dust types, consisting of different chemical composi- tions, densities, sizes and shapes. Each dust type will consequently be characterized by a specific absorption coefficient, scattering coefficient and scattering phase function [Steinacker et al., 2013]. These dust properties will be explained later on in this thesis. Accounting for different dust mixtures however complicates the deprojection calcula- tions even more and will hence not be considered in this project. A spiral galaxy viewed from its side, at an inclination of nearly 90 degrees, is referred to as an edge-on spiral galaxy. A special feature of these galaxies is that they seem to be optically thick in their central regions and optically thin in the outer regions. The reason for this is that a spiral galaxy contains interstellar dust that predominantly lies within a thin disk narrower than the stellar disk [Baes, 2012]. Edge-on spiral systems can easily be used to investigate the effects of dust on their ra- diation field, as the dust extinction in the plane of this galaxies will add up along the line of sight. This results in prominent dust lanes which are visible as thin, darkened bands in the UV or optical window, running through the galaxy’s center. This feature is clearly displayed in figure 1.1, where the bulge fraction and the amount of dust is varied from the top left to the bottom right. When increasing the dust fraction in the disk of the galaxy, the dust lane becomes clearer. Due to the typical thermal emission of the dust grains, these dusty bands can also be seen at infrared or sub-millimeter wave- lengths when making use of the appropriate telescopes or simulations which include dust emission. Figure 1.1: Models of an almost edge-on spiral galaxy with an inclination of 88◦, in which the effects of extinction is examined. 
From left to right an increase in the amount of dust in the disk is shown, while from top to bottom a growing bulge fraction is represented. [Baes, 2012] 8
  • 11. A sophisticated deprojection technique that takes dust absorption, emission and scat- tering into account makes use of the so called radiative transfer equation (RTE), which in a macroscopic way describes the interaction between matter and radiation. Solving this equation is called the RT problem, as it governs the physical process of depro- jection. As information is given in an observed image on the plane of the sky, i.e. a two-dimensional projection of a 3D structure, the inverse RT problem must be solved to obtain the 3D distribution of the examined stellar system. This can be done by a couple of different numerical methods that make use of the RTE: an iteration method, a method based on the expansion in spherical harmonics, a discretization method and a Monte Carlo technique [Baes and Dejonghe, 2002b]. In this project, the Monte Carlo method is of most importance, as it can be used to investigate more complex geome- tries. Moreover, the program that is edited and used in this thesis in particular utilizes the Monte Carlo technique to execute the different simulations. 9
  • 12. 1.2 The SKIRT program SKIRT, acronym for Stellar Kinematics Including Radiative Transfer, is an advanced 3D continuum radiative transfer code based on the Monte Carlo algorithm. The first SKIRT version, written in Fortran 77 and developed by the UGent astronomy department, was developed in 2001 to study the effect of dust absorption and scattering on the ob- served kinematics of early-type galaxies [Baes and Dejonghe, 2002a] [Baes et al., 2003]. Later on, new versions of SKIRT where developed in C++ which focused on dust ab- sorption, scattering and thermal re-emission [Popescu and Tuffs, 2005] [Baes et al., 2005]. The latest version that will be used throughout this report, SKIRT6, contains the most recent functionalities that can easily be edited and updated with new features in the Qt creator development environment [Baes et al., 2011]. Furthermore, the code is par- allelized using Q Threads. This parallelization technique significantly improves the speed of the complicated calculations performed in the code. In SKIRT6, the simulation starts from a 3D model for the stellar objects and dusty sys- tems. The different parameter values describing the model are stored in a ski file, i.e. a file with the ”.ski” extension. Using the Monte Carlo technique, it can calculate the intrinsic properties, e.g. the strength of the radiation field, dust temperature distri- bution, etc., and the observable properties of the models, e.g. images of the galaxies, SEDs (Spectral Energy Distributions), etc. These calculated results are all outputted via ASCII files and FITS (Flexible Image Transport System) images. 1.2.1 Dust radiative transfer in SKIRT The key principle in Monte Carlo radiative transfer simulations is that the radiation field is treated as a flow of a finite number of luminosity packages, with the entire lu- minosity divided among these packages. 
The simulation itself essentially consists of following the individual path or life cycle of each single photon package trough the dusty medium [Steinacker et al., 2013]. We can consider one of the many photon packages emitted by a stellar object, consist- ing of a (large) number of photons with the same wavelength. Note that, from now on, a single photon package will sometimes be referred to as a photon, while in principle these two are not the same. In the simplest case, this package can be characterized by a luminosity Lλ, a position x of the last emission or interaction and a propagation direction k. The space is divided into a number of cells with a uniform dust density attached to each cell. The path of the package is given by a straight line until it interacts with a dust grain -it can be scattered or absorbed- or leaves the dusty medium.The first Monte Carlo step is to initialize these three properties. 10
  • 13. Initialization The initial luminosity is defined trough the total luminosity of the stellar object and the number of photon packages Npp in the simulation: Lλ = Ltot λ Npp (1.1) This gives every photon package the same luminosity. The initialization of the posi- tion x and direction of emission k is done randomly. A random position is generated from the 3D stellar distribution and the propagation direction is defined by choosing a random position on the unit sphere, as stars typically emit radiation isotropically. Determination of the interaction point The second Monte Carlo step is to determine whether the photon package will interact with a dust cell or will leave the system. To do this, we need the specific intensity Iλ, a conserved quantity that is defined as the intensity per unit of wavelength. The specific intensity represents the amount of energy that is carried by radiation for a given wavelength per unit of solid angle and per unit of time, crossing a unit area perpendicular to the propagation direction. It is thus defined by: dE = IλdA⊥dΩdλdt The radiative transfer equation for one photon package will be given by: dIλ ds = −κλρIλ (1.2) where κλ is the extinction coefficient of the dust at a given wavelength, ρ is the dust density and s is the path-length covered along direction k from the starting point x. The interaction coefficient κλ corresponds to the opacity of the dust, which can be in- terpreted as the impenetrability of the medium and has a contribution of the absorption and scattering processes, hence κλ = κsca λ + κabs λ . The radiative transfer equation 1.2 or RTE can be solved as: Iλ(s) = Iλ(0)exp(−τλ) (1.3) Where we have introduced the optical depth along this particular path as: τλ(s) = s 0 κλρ(s )ds (1.4) 11
  • 14. A random interaction point can now be determined by generating a random optical depth τλ from the probability distribution function (PDF), that is given by an exponen- tial distribution: p(τλ)dτλ = e−τλ dτλ (1.5) Comparing this random optical depth τλ with the maximal optical depth τλ,path, given by equation 1.4 with s going to infinity, determines whether the photon package will interact or not. If τλ > τλ,path, there will be no interaction and hence the photon pack- age will leave the system. If τλ < τλ,path, an interaction will take place at position xint = x + sk, where the physical path length s will be determined by the inverse formula of equation 1.4. Nature of the interaction Deciding the nature of the interaction, i.e. whether a scattering or absorption process takes place, is done by defining a fraction aλ given by: aλ = κsca λ κsca λ + κabs λ , (1.6) where κsca λ is the scattering co¨efficient of the dust and κabs λ the absorption co¨efficient. The fraction aλ is called the dust grain albedo. The nature of the interaction is in this case not chosen randomly. Instead, the photon package will be split into two parts. One part with weight 1 − aλ is absorbed such that a luminosity (1 − aλ)Lλ is stored in the absorbed luminosity counter of the particular cell where the interaction happens. The remaining part with weight aλ is scattered and the Monte Carlo loop for the photon package will continue with a reduced luminosity of aλLλ. A new random direction will be generated and the propagation of the luminosity package continues on a new path. When a photon package holds only 0.01% or less of its original luminosity, the package vanishes and its life cycle is ended [Baes et al., 2011]. 1.2.2 Optimization techniques As the simple Monte Carlo Radiative Transfer method works well for 1D simulations, it is particularly inefficient for general 3D geometries of the stars and dust. 
Several optimization techniques are developed to eliminate these inefficiencies: continuous absorption method, forced scattering, peel-off technique and polychromatic photon packages [Steinacker et al., 2013]. They are all summarized in figure 1.2 and will be further ex- plained in this subsection. Notice that the absorption-scattering split method of the pho- ton packages, explained in subsection 1.2.1 where the nature of the interaction was 12
  • 15. Figure 1.2: Different optimization techniques in the Monte Carlo Radiative Transfer method. The red cells show the continuous absorption of the photons. The pink arrows represent the forced scattering technique. The light blue and orange arrows represent the peel-off techniques. [Steinacker et al., 2013] treated, is an optimization technique as well, as it tries to maximize the functionality of one photon package. Continuous absorption The Monte Carlo Radiative Transfer method can be enhanced by absorbing along the path of the photon package, instead of absorbing only at the interaction site. In this manner, there will be less noise created by the simulations while using the same amount of packages. This continuous absorption occurs along the entire path, such that the photon is split into N+2 different parts, with N the amount of dust cells along the photons path: Wesc = e−τpath , (1.7) Wsca = a(1e−τpath ) , (1.8) and Wabs,n = (1 − a)(e−τn−1 − e−τn ) (1.9) One part Wesc will leave the system, one part Wsca is scattered at the interaction location determined in section 1.2.1, and N parts Wabs,n are absorbed in the nth cell. In the above equations, τn gives the optical depth measured from the photons location to the surface of the nth cell. The parameter a is the dust grain albedo, this is the fraction of the photon that is scattered during an interaction process. Note that the part that will 13
  • 16. be scattered is the actual part of the photon package that survives and continues in the life cycle. The strength of the continuous absorption optimization method lies in the fact that all photon packages contribute to the calculation of the absorption rate of each cell they pass through. This is particularly useful for systems which are optically thin, as otherwise they would have very few absorptions in the simple MC approach. Forced scattering In the SKIRT program a concept of forced scattering is used such that every ray is forced to contribute to the scattered flux. Otherwise, in a system which would be optically thin, most photon packages would leave the system without any interaction and are hence wasted. In the forced scattering method, the exponential distribution p(τλ) given by eq. 1.5 is replaced by the adjusted distribution q(τλ), which is assigned with a weight Wf s = p(τλ) q(τλ) = 1 − e−τλ,path . q(τλ)dτλ =    e−τλ dτλ 1−e −τλ,path (τλ < τλ,path) 0 (τλ > τλ,path) (1.10) In this equation, the cut off implies that when a random optical depth τλ larger than τλ,path is generated, the exponential distribution q(τλ)dτλ = 0. In other words, when an interaction point outside the system is generated, the probability of scattering at that location is zero. This method forces the simulation to generate a random optical depth τλ that is smaller than τλ,path, and hence an interaction site that is smaller than the system itself. Peel-off technique Simple MCRT is completely inefficient in building up observable images/SEDs, as only the photons that are emitted from the system in the direction of the observer contribute to the output appearance. This flaw can be eliminated if we require that all photons directly contribute to the output images, by creating peel-off photon packages after every emission or scattering. 
The peel-off photon packages contain a portion of the luminosity of the original photons, which emerge into the direction of the detection instruments. This luminosity fraction is estimated by defining the weight factor of a photon in the direction of the observer: Wppp = p(nobs)e−τobs (1.11) 14
where τobs is the optical depth from the position of the emission or scattering event to the observer, and p(nobs) is the probability that the photon would have been directed toward the observer. By gathering all the information carried by the peel-off photons, the output of a real CCD camera can be mimicked, and images and SEDs of the models can be created.

Polychromatic photon packages

The last but, for this report, most crucial optimization technique is the use of polychromatic photon packages within the panchromatic simulations. In this case, the emitted photon packages contain photons at many different wavelengths simultaneously, instead of at one wavelength only. The technique is still controversial, as it is not obvious that it reproduces accurate results. The main goal of this project is to implement polychromatic photon packages in the SKIRT program and to investigate the pros and cons of the technique. One expected benefit is that the use of these polychromatic photons will speed up the simulations: the MC run is solved simultaneously at all wavelengths, instead of in a separate run for each wavelength. Another expected benefit is reduced noise in the color images. If monochromatic photon packages are used, it may happen that an arbitrary pixel in the RGB image is blue while its neighboring pixel is red. When polychromatic photon packages are used, however, every simulated photon can contribute to the output images at all wavelengths instead of at a single wavelength, so that a given pixel receives contributions in several colors. A detailed explanation of how the concept of polychromatic photon packages works is given in the next section, after which the implementation and results will be discussed. The main focus will be the accuracy of the simulations using polychromatic photon packages.
This precision will be investigated by comparing the adapted simulation with the original one and by statistically calculating the noise and dispersion on it.
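Among the optimization techniques above, the forced scattering step lends itself to a compact illustration. The sketch below (the function name is our own, not SKIRT's) samples an interaction optical depth from the truncated distribution q(τλ) of eq. 1.10 via inverse-CDF sampling, returning it together with the compensating weight Wfs:

```python
import math
import random

def sample_forced_tau(tau_path, rng=random.random):
    """Sample an interaction optical depth from the truncated exponential
    distribution q(tau) of eq. 1.10, together with the compensating weight
    W_fs = 1 - exp(-tau_path)."""
    w_fs = 1.0 - math.exp(-tau_path)
    # Inverse-CDF step: a uniform deviate in [0,1) maps onto [0, tau_path),
    # so the generated optical depth always lies within the system.
    x = rng()
    tau = -math.log(1.0 - x * w_fs)
    return tau, w_fs
```

Because the cumulative distribution of q is F(τ) = (1 − e^(−τ))/(1 − e^(−τλ,path)), the inversion guarantees τ < τλ,path for any uniform deviate, which is precisely the forcing described in the text.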
1.3 Polychromatism

1.3.1 Polychromatism in SUNRISE

The optimization method based on polychromatic photon packages explained in subsection 1.2.2 has already been implemented and tested in a freely available Monte Carlo dust radiative transfer code called SUNRISE [Jonsson, 2006]. In the accompanying article, P. Jonsson asserts that the use of polychromatic photon packages significantly improves the calculations in efficiency and in accuracy for spectral features. One of his results is shown in figure 1.3, where the differences between the results of the SUNRISE and RADICAL [Dullemond and Turolla, 2000] codes are plotted as a function of wavelength for four different optical depths. The results agree well for small optical depths, which means that accurate results are obtained for optically thin systems. For larger optical depths, however, and especially for the edge-on configurations, the relative differences reach ±40%, which is larger than the intrinsic differences between the codes. Stratifying the calculations into two wavelength ranges instead of one solves this problem of large discrepancies, making the results agree very well with the monochromatic ones.

To quantify another advantage of polychromatism, P. Jonsson compared the relative efficiencies of the two methods by defining the efficiency as

ε = Fλ² / (T σ²Fλ),

where T is the CPU time required to complete the calculations and σFλ the Monte Carlo sampling uncertainty in the SED. The efficiency hence quantifies the inverse of the CPU time necessary to produce results of unit relative accuracy, independent of the number of rays traced. The efficiencies of the monochromatic and polychromatic methods he obtained as a function of wavelength are shown in figure 1.4. For low optical depths, i.e. τν < 10, the efficiency of the polychromatic algorithm (solid lines) exceeds that of the monochromatic one (dashed lines).
For τν = 10 the efficiency of the monochromatic method overtakes the polychromatic one at long wavelengths, and for τν = 100, i.e. a very high optical depth, this becomes the case at shorter wavelengths as well. When stratified polychromatic calculations are used, an efficiency greater than that of the monochromatic calculations is obtained.
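Jonsson's efficiency metric can be estimated directly from repeated Monte Carlo runs. The sketch below is our own helper (not part of SUNRISE or SKIRT), assuming the sampling uncertainty σFλ is estimated from a set of independent flux samples for one wavelength bin:

```python
def efficiency(flux_samples, cpu_time):
    """Monte Carlo efficiency eps = F^2 / (T * sigma_F^2) for one wavelength
    bin, following Jonsson's definition; flux_samples are independent MC
    estimates of the flux and cpu_time is the total CPU time T."""
    n = len(flux_samples)
    mean = sum(flux_samples) / n
    # unbiased sample variance of a single estimate
    var = sum((f - mean) ** 2 for f in flux_samples) / (n - 1)
    # variance of the averaged estimator over n samples
    sigma2 = var / n
    return mean ** 2 / (cpu_time * sigma2)
```

With this definition, doubling the CPU time at fixed noise halves the efficiency, while halving the noise at fixed CPU time quadruples it, so methods with different run times and noise levels can be compared fairly.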
Figure 1.3: Difference in SED between results from SUNRISE and RADICAL. The results of the polychromatic algorithm are indistinguishable from those obtained with the monochromatic algorithm. For the optically thick case, stratified polychromatic calculations are used to avoid the problem of diverging results. [Jonsson, 2006]

In this thesis, we will investigate whether benefits similar to those obtained in SUNRISE can be achieved when this optimization technique is implemented in SKIRT. Our main goal is to assess the accuracy of the calculations when the polychromatic algorithm is used. As the code will probably not be implemented in the most efficient way, it will not be possible to compare the speeds of the monochromatic and polychromatic versions. Hence, we will not focus on the efficiency of the code, which makes it impossible to investigate this expected advantage of the optimization technique.
Figure 1.4: Efficiencies of the polychromatic (solid lines) and monochromatic (dashed lines) methods. The stratified polychromatic calculation is shown as a dot-dashed line. For lower optical depths, the efficiency of the polychromatic algorithm exceeds that of the monochromatic one at all wavelengths. For high optical depths, the stratified calculations are needed to keep the efficiency above that of the monochromatic case. [Jonsson, 2006]

1.3.2 Polychromatism in SKIRT

In the monochromatic case, every wavelength is treated independently, which is possible because scattering by dust grains is an elastic process. Here we introduce and implement a polychromatic algorithm within the panchromatic simulation of SKIRT. Since in this case every ray samples every wavelength, the Monte Carlo radiative transfer calculations are solved simultaneously at all wavelengths. The photon packages contain a list of luminosities, each corresponding to a certain wavelength. To avoid unnecessary calculations, a minimum and a maximum wavelength are stored as properties of the photon package, above and below which the luminosities are zero. A reference wavelength is also stored as a characteristic of the photon package, for use in the simulation of the propagation and scattering processes. A natural first guess for this reference wavelength lies halfway through the optical range, at about 0.55 micron, as the stellar emission of photon packages mostly covers the optical range. This value at the center of the V band is a good guess for the stellar emission phase, but as soon as thermal dust emission is taken into account, the reference wavelength should be set halfway through the IR range, as photons emitted by dust grains mostly cover that range.
The polychromatic photon packages in SKIRT now include 9 characteristics:

• A luminosity vector, containing every wavelength-dependent luminosity Lλ, ordered from the lowest to the highest wavelength contained in the simulation. From this vector, the total luminosity Ltot of the package can be computed.
• A constant Nlambda, referring to the number of wavelength bins included in the simulation.

• The minimum wavelength index, below which the luminosities are equal to zero.

• The maximum wavelength index, above which the luminosities are equal to zero.

• The reference wavelength index, from which the random interaction optical depth and the random scattering angle will be drawn. In the stellar emission phase this is set to the bin containing 0.55 µm, i.e. halfway through the optical range.

• The position x of the photon package, i.e. the location in the system where the last process took place.

• The propagation direction k of the photon package.

• A flag indicating the origin of the package, i.e. whether it was emitted by a star or by a dust grain.

• A counter indicating the number of scattering events that the photon package has already experienced.

Note that the selected range of wavelengths is divided into bins, such that every bin represents a certain wavelength group in the simulation and is treated as one wavelength in the calculations. Increasing the number of bins will hence increase the accuracy of the calculations.

The polychromatic photon packages are relevant only within the panchromatic modeling. A panchromatic simulation constructs a 3D model of the stars and dust from which it can reproduce the observed images and SEDs over the entire electromagnetic spectrum. The simulations usually span the UV to submillimeter wavelength bands, including absorption, scattering and thermal emission by dust grains [Camps, 2013]. Hence, the implementation of the polychromatic photon packages should reside within the PanMonteCarloSimulation class. SKIRT also offers another type of Monte Carlo simulation that operates at only one or a few distinct wavelengths rather than a discretized range. Such an oligochromatic simulation does not include thermal dust emission.
Adjustments to sub- or superclasses shared by the panchromatic and oligochromatic simulations must therefore be made without affecting the latter, i.e. the OligoMonteCarloSimulation class.
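The nine characteristics listed above can be summarized in a small container type. This is only an illustrative sketch in Python; the actual SKIRT implementation is a C++ class whose member names differ:

```python
from dataclasses import dataclass

@dataclass
class PolyPhotonPackage:
    """Sketch of the nine properties of a polychromatic photon package
    (names are hypothetical, not the actual SKIRT member names)."""
    luminosities: list     # L_lambda per wavelength bin, lowest to highest
    nlambda: int           # number of wavelength bins in the simulation
    ell_min: int           # lowest bin index with nonzero luminosity
    ell_max: int           # highest bin index with nonzero luminosity
    ell_ref: int           # reference-wavelength bin index (near 0.55 micron)
    position: tuple        # x: location of the last emission or interaction
    direction: tuple       # k: current propagation direction (unit vector)
    stellar_origin: bool   # True if emitted by a star, False if by dust
    nscatt: int = 0        # number of scattering events experienced so far

    def total_luminosity(self):
        """L_tot is simply the sum of the per-bin luminosities."""
        return sum(self.luminosities[self.ell_min:self.ell_max + 1])
```

Keeping the minimum and maximum indices explicit allows every per-wavelength loop to skip the bins known to carry zero luminosity, exactly as described above.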
1.3.3 Problem and solution

Using a reference wavelength for each photon package is a delicate technique, as it can introduce a serious problem. The probability distributions from which the interaction optical depths (and thus the path lengths) and the scattering directions of the photon packages are sampled are wavelength dependent. For example, rays of shorter wavelength tend to travel shorter distances before interacting, as the dust opacity increases towards these wavelengths. Thus, two important quantities used in the propagation and scattering of the photon package, namely the interaction optical depth τλ and the scattering phase function Φs(θ, λ), depend on wavelength but are sampled only at the reference wavelength. These probability distributions are consequently exactly correct only at the reference wavelength, while they deviate from the exact values at the other wavelengths. Biased distributions are used to compensate for this problem: the intensity of the ray at each wavelength is altered at the point of interaction by means of a weight factor wλ. This biasing factor is shown for the forced scattering case in eq. 1.12, where eq. 1.10 is used to calculate the quotient of the probability distributions:

wλ = q(τλ)/q(τref) = e^(τref−τλ) (τλ/τref) (1 − e^(−τref,path))/(1 − e^(−τλ,path)) (1.12)

This biasing factor is of course equal to unity at the reference wavelength. Note that τλ,path is the total optical depth at a given wavelength λ from the point of emission to the edge of the medium in the direction of propagation. As a part of the ray will interact somewhere along the path, the interaction optical depth τλ is randomly drawn in the range [0, τλ,path]. The probability of scattering in a certain direction is given by the scattering phase function Φs(θ).
In the polychromatic simulation, the scattering angle θ is drawn at the reference wavelength λref. The biasing factor by which the ray intensity must be multiplied after scattering is then given by eq. 1.13:

wλ = Φs(θ, λ)/Φs(θ, λref) (1.13)

Note that, for a fixed number of rays, the errors will probably increase at wavelengths where the dust opacity differs strongly from that at the reference wavelength. Wavelengths at the extremes of the range will have biasing factors that deviate most from the reference weight factor, namely 1. This complication can make the biasing
factors very large, which can dominate the results at these particular wavelengths [Jonsson, 2006]. A proper choice of the reference wavelength is hence crucial for the accuracy of the method. It should be chosen experimentally, such that the range of weight factors encountered in the problem is minimized. It is not always easy to select an ideal reference wavelength that is not affected by the problem mentioned above. In some models, for example very optically thick ones, the value of the weight factor in eq. 1.12 increases without bound. A possible solution is to consider partly polychromatic photon packages, which contain a smaller range of wavelengths rather than the whole electromagnetic spectrum. This can be done by calculating the biasing factors before the propagation and scattering processes are carried out. When these weight factors deviate too much from unity (the value of wλ at the reference wavelength), the photon package can be split into two or more packages, each containing a part of the luminosity vector of the original one. This method will be referred to as the photon splitting technique. New reference wavelengths are calculated for these split luminosity packages, lying between their minimal and maximal wavelengths. The difficulty now lies in deciding when the biasing factor is too high. The higher these factors become, the less accurate the calculations will be. On the other hand, setting a strict limit on these factors will result in a large number of split photons, which slows down the calculations. A certain deviation from unity should therefore be chosen that gives a good balance between the speed of the calculations and the accuracy they deliver.
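The biasing factor of eq. 1.12 and the splitting criterion just described can be sketched as follows (the function names and the exact splitting criterion are our own assumptions, not SKIRT's):

```python
import math

def bias_factor(tau_l, tau_ref, tau_l_path, tau_ref_path):
    """Biasing factor of eq. 1.12 for the forced-scattering case: ratio of the
    true truncated-exponential density at wavelength lambda to the sampling
    density at the reference wavelength."""
    return (math.exp(tau_ref - tau_l) * (tau_l / tau_ref)
            * (1.0 - math.exp(-tau_ref_path)) / (1.0 - math.exp(-tau_l_path)))

def needs_split(bias_factors, b_dev):
    """Photon-splitting criterion: split the package when any weight deviates
    from unity by more than the chosen limit b_dev, the tunable trade-off
    between accuracy and speed described in the text."""
    return any(abs(w - 1.0) > b_dev for w in bias_factors)
```

At the reference wavelength all four optical depths coincide and the factor reduces to unity, as required.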
1.4 Overview of the symbols

The symbols that were used in the previous sections to describe the radiative transfer algorithm, and that will return in the following sections, are briefly summarized in the table below. Note that, except for the position, the propagation direction and the biasing limit, all of these quantities clearly depend on wavelength.

Symbol : Description
x : Photon package's position of the last emission or interaction.
k : Propagation direction of the photon package from a certain position x.
κ : Dust opacity or interaction coefficient per unit mass.
a : Dust grain albedo (the fraction of the photon that is scattered during an interaction).
τλ,path : Total optical depth of a photon's path at wavelength λ: from the point of emission to the edge of the medium.
τλ : Randomly drawn interaction optical depth at wavelength λ.
Lλ : Luminosity of the ray at wavelength λ.
Ltot : Total luminosity (over all wavelengths) of a photon package or of the system.
wλ : Biasing factor used to alter the luminosity Lλ when the position of interaction or the scattering angle is drawn at the reference wavelength λref instead of at λ.
bdev : Limit set to restrain the magnitudes of the biasing factors.
Ftot,λ : Total flux of the system at wavelength λ.
Ftrans,λ : Total flux of the system at wavelength λ without consideration of dust.
Fdir,λ : Flux of the system at wavelength λ reaching the detectors directly, affected by extinction but not by scattering.
Fsca,λ : Scattered flux of the system at wavelength λ.
Fdust,λ : Flux of the system emitted by the dust at wavelength λ.
2 Implementation of the polychromatic photon packages

2.1 The different stages

In this section, the different stages that follow each other in the panchromatic simulation are briefly itemized, so as to give a clear view of the structure of the SKIRT code. SKIRT starts the panchromatic simulation by activating the stellar emission phase. This stage, run in the code by the runstellaremission function, starts and simulates the life cycle of every photon package. In particular, the function gives birth to all of the luminosity packages and, after the radiation emerging towards the detection instruments has been estimated, begins their propagation through the dust. The photon's life cycle is schematically explained in figure 2.1. How the birth and initialization of the photons is established will be the topic of section 2.2. There, an idealized model will be put together that does not contain dust. As soon as the panchromatic simulation includes a dust system, which is usually the case for all types of stellar objects, dust absorption and scattering are automatically enabled. The stellar emission phase simulates the propagation through this dust by entering a cycle consisting of five different steps, which are explained in more detail in section 2.3. When all photons have terminated their life cycles and a dusty medium is present in the system, the next stage of the simulation is activated. This is the so-called dust self-absorption phase. It simulates the continuous emission and re-absorption of photon packages by the dust grains. Because this particular phase of the simulation, described in the rundustselfabsorption function, is only possible when dust emission is taken into account, it is treated in one of the last parts of this thesis (section 2.5). The final phase of a panchromatic simulation is the dust emission phase.
This phase will be explained in section 2.4 and is driven by the rundustemission function. In this stage, the absorbed luminosity of the entire simulation, with or without dust self-absorption, is estimated in such a way that the dust can begin its thermal emission. The photons
which underwent interstellar reddening will leave the system and peel-off packages are created. Note that if the simulation covers the UV and optical bands only, dust emission will most likely be irrelevant.

[Figure 2.1 is a flow chart of the successive functions launch, peeloffemission, fillDustSystemPath, simulateescapeandabsorption, simulatepropagation, peeloffscattering and simulatescattering, with the life cycle terminating once Ltot drops below Lmin.]

Figure 2.1: The life cycle of an individual photon emitted by a stellar object. This figure schematically shows the successive functions called in the stellar emission phase. The same order of functions (except for the call to the launch function) and the same terminating condition are used in the dust self-absorption and dust emission phases. Note that in the dust self-absorption phase no peel-off photons are created.

Parallelization

The monochromatic radiative transfer code in SKIRT, i.e. every stage explained above, is currently parallelized using multithreading. This type of programming allows multiple threads to exist within a single process. The threads share the memory of the process, but are able to perform calculations independently of each other [Quinn, 2004]. This type of shared-memory parallelization is called task or control parallelism, and splits the iterations of the MC run over the different threads. In this way, the SKIRT program can simulate the life cycles of different photon packages simultaneously, by distributing them over all available threads. This technique significantly improves the speed of the calculations and is therefore extremely useful for a very CPU-intensive simulation. Note that one of the planned developments of SKIRT is to use MPI (Message Passing Interface) for data parallelism, in order to allow Monte Carlo simulations to be run on distributed-memory systems [Baes et al., 2011]. This should speed up the calculations even more when running on supercomputers.
The main task parallelism in the original SKIRT version is situated in the loop over wavelength: SKIRT runs the stellar and dust emission phases at different wavelengths simultaneously. When polychromatic photon packages are implemented, the iterations in the stellar and dust emission phases run over different photon packages instead of over different wavelengths. An adaptation must therefore be made to construct a loop over packages containing all wavelengths.
2.2 Stellar emission of the photon packages

As the photon packages begin their life cycles, they are born or launched from the stellar system specified by the user. After being created, an optimization technique is applied for the first time, in which a peeled-off photon package is stripped off the original one. These two procedures are called only once for every photon package.

Launching of the photon packages

This feature is handled by the launch function, which initializes all the properties of a newborn photon package. There are essentially four stellar systems in which the launch function is implemented, itemized here by their class names in the code:

• AdaptiveMeshStellarSystem: this class represents stellar systems defined by the stellar density and properties, such as the metallicity Z or the age of the stellar population, imported from an adaptive mesh data file.

• CompStellarSystem: this class represents stellar systems that are the superposition of a number of components. The individual components are defined internally as objects of the StellarComp class.

• SPHStellarSystem: this class represents stellar systems defined by a set of SPH (Smoothed Particle Hydrodynamics) star particles, for example resulting from a cosmological simulation. The information on the SPH star particles is read from a file.

• VoronoiStellarSystem: this class represents stellar systems defined by the stellar density and properties, such as the metallicity Z or the age of the stellar population, imported from a Voronoi mesh data file.

In general, all these stellar systems feature approximately the same launch function to set up a photon package with the 9 initial characteristics described in section 1.3.2. The flag indicating the origin of the photon is set to true, as it is born and emitted by the stellar object itself.
As explained in section 1.2.1, the initial position and direction of the photon are generated randomly. An ordered table of floating-point values, called the normalized cumulative total luminosity vector, must be created to initialize the photon's position, as this vector is used to identify in which cell or in which stellar object the emission takes place. Calculating the total luminosity of each cell or object, the normalized cumulative total luminosity vector takes the
following form:

Xtot = (0, Ltot,1/Ltot, (Ltot,1 + Ltot,2)/Ltot, ..., 1)

Here, the total luminosity Ltot,i over the full wavelength range belongs to each cell or object independently, while Ltot is the total luminosity of the entire system. What a particular cell or object refers to depends on the stellar system used in the simulation. In the case of an adaptive mesh or Voronoi stellar system, the cells are mesh cells. A random mesh cell is determined from the above normalized cumulative total luminosity vector, after which a random position is determined within this cell's boundaries. When an SPH stellar system is used, the i'th object represents the i'th SPH particle. Once a random SPH particle has been chosen, a position is drawn randomly from the smoothed distribution around the particle's center. Finally, the objects can also refer to different stellar components; in that case, the position of emission is determined randomly from the geometry of the i'th stellar component. A luminosity vector for the photon package is created within this launch function, determined by:

Lλ = Lλ,i (Ltot/Ltot,i) (1/Npp)

Hence, when a photon package is launched at a position x within the i'th cell or object, its luminosity at wavelength λ is equal to the luminosity of the cell or object at that particular wavelength, normalized with respect to the total luminosity of the system (the total luminosity of all cells or stellar objects) by the factor Ltot/Ltot,i, and divided by the total number of photon packages. Because of this, the minimum and maximum wavelengths, below and above which no luminosities are found, are equal to those determined by the i'th cell or object. The value of the reference wavelength is not yet of importance, but will be at later stages of the simulation. For convenience, this property is set to the index of the wavelength bin closest to 0.55 µm.
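The launch step described above can be sketched as follows; the helper names are hypothetical and the per-bin luminosities implement Lλ = Lλ,i (Ltot/Ltot,i)/Npp:

```python
import bisect
import random

def cumulative_distribution(cell_luminosities):
    """Normalized cumulative total luminosity vector
    X_tot = (0, L1/L, (L1+L2)/L, ..., 1), plus the system total L_tot."""
    total = sum(cell_luminosities)
    xtot, running = [0.0], 0.0
    for lum in cell_luminosities:
        running += lum
        xtot.append(running / total)
    return xtot, total

def pick_cell(xtot, rng=random.random):
    """Draw a random cell/object index from the cumulative distribution."""
    return bisect.bisect_right(xtot, rng()) - 1

def launch_luminosities(sed_of_cell, ltot_cell, ltot_system, npp):
    """Per-bin luminosities of one package launched from cell i:
    L_lambda = L_lambda,i * (L_tot / L_tot,i) / N_pp."""
    return [l * ltot_system / (ltot_cell * npp) for l in sed_of_cell]
```

With this normalization, every package carries a total luminosity Ltot/Npp regardless of which cell it was launched from, so the Npp packages together reproduce the system luminosity.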
Creating peel-off photon packages

After the stellar photon packages are emitted by the adapted launch function, the peeloffemission function is called to estimate the radiation emerging in the direction of the detection instruments. One peel-off photon package is created for each instrument, having the same 9 characteristics as the original polychromatic photon package. Only one property, the propagation direction, is altered, towards the direction of the observer. As the stellar emission is considered to be isotropic, no extra weight factor is needed to compensate for the luminosities emerging in that particular direction. In other words, the probability that the photon would have been emitted towards the detection
instruments in eq. 1.11 is equivalent to the probability that it is emitted in any other direction. The instruments detect one wavelength at a time (just as in the monochromatic case), so a loop over the wavelength indices must be added in the code of the peel-off technique. Note that this loop only covers the range of wavelengths containing nonzero luminosities, as indicated by the minimum and maximum wavelength indices of the peel-off photon. This is useful, as it excludes unnecessary calculations.
2.3 Propagation through the dust

The second step in the implementation is to add a dusty medium to the simulation of a stellar object, which causes scattering and extinction of the emitted photon packages as they propagate through this medium. The inclusion of dust can be enabled by the user when setting up the ski file. The life cycle of a photon package in a dust system, after it has been launched by stellar emission, is summarized in the box below. These five procedures are iterated until the polychromatic photon package vanishes, once its total luminosity has decreased below a critical value.

fillDustSystemPath
simulateescapeandabsorption
simulatepropagation
peeloffscattering
simulatescattering

Filling the system with dust

The first thing that must happen after the emission of the photon package is that its path must be filled with dust. This is done in the fillDustSystemPath function, by calculating the information on the path of the photon package through the dust system, in particular the optical depth, and storing it in a so-called DustSystemPath object. This calculation is done for every photon package and is still performed for every wavelength independently, within the range delimited by the minimum and maximum wavelength indices of the package. This means that only a small loop running over the wavelengths of the photon must be added in the fillDustSystemPath function.

Simulating the escape and absorption of the package

Next, the optimization technique called continuous absorption, demonstrated in figure 1.2, is performed by the simulateescapeandabsorption function. This feature simulates the escape from the system and the absorption by the dust of a fraction of the wavelength-dependent luminosity of a photon package. It actually splits this luminosity Lλ into N+2 different parts, as explained in section 1.2.2, with N the number of dust cells along the photon's path.
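The N+2 split can be sketched as below. This is our own simplified formulation, assuming a single dust component with constant albedo a: the fraction absorbed in cell n is (1 − a)(e^(−τn−1) − e^(−τn)), the escaping fraction is e^(−τpath), and the surviving scattered fraction is a(1 − e^(−τpath)):

```python
import math

def split_luminosity(l_lambda, tau_cells, albedo):
    """Split L_lambda into N+2 parts (simplified sketch of continuous
    absorption): the fraction escaping the system, the fraction absorbed in
    each of the N dust cells on the path, and the fraction that scatters and
    survives in the life cycle."""
    # cumulative optical depth at the exit of each cell
    tau_cum, running = [], 0.0
    for t in tau_cells:
        running += t
        tau_cum.append(running)
    tau_path = tau_cum[-1]
    escaped = l_lambda * math.exp(-tau_path)
    absorbed, prev = [], 0.0
    for tau in tau_cum:
        absorbed.append(l_lambda * (1.0 - albedo)
                        * (math.exp(-prev) - math.exp(-tau)))
        prev = tau
    scattered = l_lambda * albedo * (1.0 - math.exp(-tau_path))
    return escaped, absorbed, scattered
```

By construction the N+2 parts sum to the original Lλ, so no luminosity is created or destroyed by the bookkeeping.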
The part that scatters is the part of the photon package that actually survives and continues in the photon package's life cycle. As before, this
calculation is still performed for every wavelength. Thus, a small loop ranging over the minimum and maximum wavelength indices of every photon package is inserted in the function. At this stage of the simulation only single dust components are treated, as multiple dust components would make the calculations too complicated.

Simulating the propagation through the dust

If the total luminosity Ltot of the photon package after the continuous absorption drops below a certain minimum value, its life cycle is ended and the luminosity package is no longer taken into account. If, on the contrary, it does not reach the critical luminosity value, the scattered part of the photon package subsequently propagates through the dusty medium in the simulatepropagation function. This function determines the next scattering location of the photon package, as explained in section 1.2.1, and then simulates the propagation to this position. Here, the problem mentioned in section 1.3.3 arises, as a random interaction optical depth must be sampled from the PDF given in eq. 1.10, which is done only at the reference wavelength. All luminosities Lλ of the photon package then have to propagate to a random interaction point, determined by the value of the optical depth of the photon's path τref,path at the reference wavelength. This is only correct for the luminosity at the reference wavelength Lλref. To compensate for this miscalculation, the intensity of the ray at the other wavelengths should be multiplied at the point of interaction by the previously explained biasing factors wλ. Recall eq. 1.12 for these factors:

wλ = e^(τref−τλ) (τλ/τref) (1 − e^(−τref,path))/(1 − e^(−τλ,path))

This method is schematically displayed with a simplistic model in figure 2.2. The biasing factors in eq. 1.12 can be calculated in three steps. First, the optical depth of the photon's path τref,path is retrieved from the DustSystemPath object.
This is only done at the reference wavelength λref, such that the random optical depth τref can be drawn in the range [0, τref,path]. The next step is to create a loop over all wavelengths of interest of the package, ranging from the minimum to the maximum wavelength index. In this loop, τλ,path is extracted as before. The only difference for the wavelengths other than the reference wavelength is that τλ is not drawn randomly. Looking at eq. 1.4 for an arbitrary wavelength, it is possible to divide this by the same equation for the
reference wavelength to get:

τλ/τref = (∫0→s κλ ρ(s') ds') / (∫0→s κref ρ(s') ds') = κλ/κref.

This can also be done for τλ,path and τref,path, which contain the same integrals but with the path length s running to the edge of the medium, giving the same right-hand side as in the above equation. The optical depth τλ can thus be estimated using the fact that

τλ/τref = τλ,path/τref,path.

Here it is clear that multiple dust components would result in much more complicated calculations, as the different mass densities would have to be added. Eventually we can calculate the biasing factors wλ for every wavelength, using the optical depths calculated above together with eq. 1.12.

[Figure 2.2 shows a luminosity vector (Lλ1, ..., Lλref, ..., LλN) whose components are multiplied at the interaction point by the biasing factors wλ1, ..., wλref = 1, ..., wλN.]

Figure 2.2: Demonstration of the method of compensation by use of biasing factors. These factors are wavelength dependent and must be calculated using eq. 1.12. Note that at the reference wavelength the weight factor equals unity.

Simulating peel-off photon packages

After the propagation of the photon package to a certain point in the system, peel-off photon packages are created by the peeloffscattering function. This is another of the optimization techniques discussed in section 1.2.2, where figure 1.2 shows the working of the method. Note that this function is called just before a scattering event is simulated. In this calculation, a peel-off photon package is created for every instrument in the instrument system and is forced to propagate in the direction of the observer. This peel-off package also contains a list of luminosities, of which every wavelength-dependent luminosity Lλ is detected independently. To compensate for
the change in propagation direction, Lλ must be altered by multiplying it with an additional weight factor given by eq. 1.11. This factor, which represents the probability that a photon package would be scattered in the direction kobs if its original propagation direction was k, is wavelength dependent. Because of this, the detection of the polychromatic peel-off photon packages is still performed for every wavelength independently, just as in the monochromatic case.

Simulating the scattering event

As a final aspect of the photon package's life cycle, it undergoes scattering by the dust grains. This procedure is accomplished by the simulatescattering function, which increases the number of scattering events experienced by the photon package by one. Secondly, the function generates a new random propagation direction, sampled from the scattering phase function Φs(θ, λref), remembering that only one dust component is assumed. As the new propagation direction is drawn at the reference wavelength, biasing factors must be calculated for the other wavelengths. Recall the formula for these factors, given in eq. 1.13:

\[
w_\lambda = \frac{\Phi_s(\theta, \lambda)}{\Phi_s(\theta, \lambda_{\mathrm{ref}})}.
\]

The concept of these weight factors is the same as before, i.e. the intensity of the ray at the different wavelengths must be multiplied with the computed values of the biasing factors, as demonstrated in figure 2.2.

When the photon finishes this last process, it has completed one cycle of its life. Subsequently, it starts the next cycle by filling the newly generated path of the photon package with dust. All the previous processes are run again, until the package reaches a total luminosity Ltot lower than Lmin. When this is the case, the photon package dies and the life cycle is terminated.
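The two kinds of biasing factors appearing in this life cycle (eq. 1.12 for the interaction optical depth and eq. 1.13 for the scattering angle) can be sketched in a few lines. This is an illustrative Python sketch, not SKIRT's actual C++ implementation; the Henyey-Greenstein phase function and its asymmetry parameters are assumptions chosen purely for the example.

```python
import numpy as np

def optical_depth_weights(tau_ref, tau_lam, tau_ref_path, tau_lam_path):
    """Eq. 1.12: ratio of the interaction-optical-depth probability density
    at wavelength lambda to the density actually sampled at the reference
    wavelength (forced scattering truncates both densities at the total
    path optical depths)."""
    return (np.exp(tau_ref - tau_lam) * tau_lam / tau_ref
            * (1.0 - np.exp(-tau_ref_path)) / (1.0 - np.exp(-tau_lam_path)))

def hg_phase(theta, g):
    """Henyey-Greenstein phase function (an assumed stand-in for the dust
    mixture's true phase function), normalized over solid angle."""
    return (1.0 - g**2) / (4.0 * np.pi
            * (1.0 + g**2 - 2.0 * g * np.cos(theta))**1.5)

def scattering_weights(theta, g_lam, g_ref):
    """Eq. 1.13: Phi_s(theta, lambda) / Phi_s(theta, lambda_ref) for a
    scattering angle theta that was sampled at the reference wavelength."""
    return hg_phase(theta, np.asarray(g_lam)) / hg_phase(theta, g_ref)

# tau_lam follows from the proportionality tau_lam/tau_ref = tau_lam_path/tau_ref_path
tau_lam = 0.5 * 4.0 / 2.0
w_tau = optical_depth_weights(0.5, tau_lam, 2.0, 4.0)
```

At the reference wavelength itself both weight factors reduce to unity, as figure 2.2 indicates.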
2.4 Thermal dust emission

In the preceding sections the thermal emission by the dust grains has not been considered. Only a simple adaptation in the ski file must be made to take this feature into account. As explained in section 2.1, the rundustemission function is responsible for this feature.

Before the life cycle of the photon packages can be started, the total luminosity that is emitted from every dust cell must be determined. Originally, for a monochromatic photon package at wavelength λ and a dust cell m, this is just the product of the total luminosity absorbed in that cell, L^abs_m, and the normalized SED at the particular wavelength λ corresponding to that cell, as obtained from the dust emission library. The same is done in the adjusted program, such that only a loop over the wavelengths has to be implemented to construct the luminosity vector of the polychromatic photons, determined by L^abs_m and the normalized dust SED at every wavelength corresponding to cell m. The reference wavelength is this time set somewhere between the IR and submm range, as thermal dust emission predominantly covers that wavelength range; it is determined by the center of the wavelengths the emitted photons cover.

A vector Xtot is created that describes the normalized cumulative total luminosity distribution, with components

\[
X_{\mathrm{tot},m} = \frac{\sum_{m'=0}^{m} L_{\mathrm{tot},m'}}{L_{\mathrm{tot}}} \tag{2.1}
\]

This vector is used to generate random dust cells from which photon packages can be launched. The dust emission can start afterwards by launching Npp polychromatic photon packages. The original position of each of these photons is chosen as a random position in the dust cell m, in its turn chosen randomly from the cumulative luminosity distribution Xtot.
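A minimal sketch of this cell-sampling step, assuming a hypothetical array of per-cell luminosities (Python for illustration, not the actual SKIRT C++ code):

```python
import numpy as np

rng = np.random.default_rng(1)

def cumulative_distribution(L_tot):
    """Normalized cumulative total luminosity distribution X_tot of eq. 2.1."""
    X = np.cumsum(np.asarray(L_tot, dtype=float))
    return X / X[-1]

def draw_cells(X_tot, n):
    """Draw n random dust cell indices; cell m is chosen with probability
    L_tot,m / L_tot. side='right' maps a uniform deviate u in [0,1) to the
    first cell whose cumulative luminosity exceeds u."""
    return np.searchsorted(X_tot, rng.random(n), side='right')
```

In SKIRT the launch position is then drawn uniformly within the selected cell; that geometric step is omitted here.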
As the reddened photon packages are emitted, their life cycle through the dust can begin, just as explained in section 2.3, by making use of the five specific functions: fillDustSystemPath, simulateescapeandabsorption, simulatepropagation, peeloffscattering and simulatescattering. Note that no further adaptations must be made in these functions.
2.5 The dust self-absorption

The reddened photon packages, created by thermal emission of the dust after stellar radiation was absorbed, can undergo a similar process in the rest of the dusty medium. This process is called dust self-absorption and is a method to calculate the internal equilibrium temperature of the dust. As the thermal emission of dust ranges over the infrared and sub-millimeter spectrum, it is these wavelengths that are re-absorbed, although interstellar dust is somewhat less efficient at absorbing in these ranges.

The relevance of the dust self-absorption process depends on two aspects: the optical depth of the system and the temperature of the dust. If the system contains a modest amount of dust, having a low optical depth throughout the medium, it will be transparent to long-wavelength radiation. When the system contains a considerable amount of dust, making it opaque, this process becomes important, as mid- and far-infrared radiation is then more likely to be absorbed. Secondly, the dust temperature affects the likelihood of absorption as well. An increasing dust temperature shifts the peak of the dust SED to shorter wavelengths, causing it to enter the NIR and optical ranges. Dust is more efficient at absorbing these wavelengths, making the dust self-absorption process more important in these cases.

The dust self-absorption function consists of an outer loop which repeatedly absorbs and re-emits the radiation. This outer loop, and hence also the function itself, terminates when either the maximum number of dust self-absorption iterations has been reached, which is set to a value of 100, or when the total luminosity absorbed by the dust is stable, i.e. when it does not change by more than 1% compared to the previous cycle.

Before the life cycle of the emitted polychromatic photon packages is started, a similar process as in section 2.4 must be considered.
This is determining the total luminosity Ltot,m absorbed in every dust cell, which hence will be re-emitted. Again a normalized cumulative total luminosity distribution as in eq. 2.1 is created, used to determine the randomly generated dust cells from which the photons will be emitted.

When the life cycle within this process starts, the peel-off technique (i.e. the function peeloffscattering) should not be considered, as the dust self-absorption phase is a technique for computing the internal equilibrium of the dust grains. As the emergent radiation field does not have to be estimated, the life cycle of a photon package within this phase is somewhat simpler than before.

When this function is terminated, the function explained in section 2.4, which calculates the last emission of the dust and afterwards estimates the radiation emerging in the direction of the instruments, is called.
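The termination logic of the self-absorption phase described in section 2.5 (at most 100 iterations, or a change in absorbed luminosity below 1%) can be sketched as follows; `absorb_and_reemit` is a hypothetical callback standing in for one full absorption/re-emission cycle, not an actual SKIRT function.

```python
def self_absorption_phase(absorb_and_reemit, max_iterations=100, tolerance=0.01):
    """Sketch of the dust self-absorption outer loop: repeat the
    absorb/re-emit cycle until the total absorbed luminosity changes by
    no more than 1% relative to the previous cycle, or until the
    iteration cap of 100 is reached."""
    previous = None
    for iteration in range(1, max_iterations + 1):
        L_abs = absorb_and_reemit()
        if previous is not None and abs(L_abs - previous) <= tolerance * previous:
            return iteration, L_abs   # converged
        previous = L_abs
    return max_iterations, previous   # iteration cap reached
```

The returned iteration count makes it easy to check, in a test run, whether convergence or the iteration cap ended the loop.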
2.6 Overview

In the preceding sections the different stages within the panchromatic simulation in SKIRT were explained in detail. We described all the adaptations in the code that were necessary to implement the polychromatic algorithm. These implementations were done in different classes, of which the most important ones are the PhotonPackage class, the MonteCarloSimulation class and the PanMonteCarloSimulation class. Note that the adjustments made in sub- or superclasses that are shared by the panchromatic and oligochromatic simulations of SKIRT are not supported by the latter.

The most difficult part was the implementation of the biasing factors (eq. 1.12 and 1.13) that were needed to compensate for the incorrect sampling of the interaction optical depth and scattering angle at wavelengths other than the reference wavelength. This was explained in section 2.3.

It is now possible to investigate the results obtained by the polychromatic algorithm. This will be done in the next chapter by comparing the original SKIRT code with the adapted one and verifying their similarities and differences. The resulting output will mainly be tested for accuracy.
3 Findings

3.1 Stellar model without dust: bi-Plummer model

For investigating the first step of the implementation, it is advisable to choose a very simple galaxy model. Here, a spherical stellar model is used, called a Plummer galaxy, which represents elliptical galaxies relatively well. We create such a simple ski file consisting of two unequal Plummer model stellar systems, each with a black body spectral energy distribution (SED). The first Plummer model has a black body temperature of 10⁴ K, while the other is an order of magnitude colder, having a temperature of 10³ K. The peak of the spectrum of the latter will hence be visible at longer wavelengths than that of the first. For running the simulation, 10⁷ photon packages are launched within the wavelength range of 0.1−10 µm, divided into 21 wavelength grid points. The more wavelength bins are used to calculate the stellar emission, the more accurate the results will be. Only wavelengths within the optical range are taken into account, since there is no emission by dust grains nor a dust system present in this model.

This simple ski model, without any dust components, is an ideal way to check the validity of the implementation of the launch function, i.e. the function that creates the photon packages. As no dusty medium is present, the photon packages will not undergo any interaction when propagating towards the observers. They will only be emitted by the stellar systems, after which the peel-off technique is used to estimate the radiation that emerges in the directions of the detectors. For now, the setting of the reference wavelength is of no importance, as this value is not used in the launch function or the peeloffemission function. It will, however, become significant whenever a dust system is present in the model.

When looking at the SED files, we can distinguish five different fluxes spanning multiple wavelength bands.
• Total flux: the total flux detected by the instruments. It contains the direct, scattered and dust flux.

• Direct flux: the flux resulting from the stellar photon packages that directly reach the instruments.

• Scattered flux: the flux resulting from stellar photon packages that were scattered by the dust before they reached the instruments. As no dust is included in this model, this flux component will be zero.

• Dust flux: the flux resulting from photon packages that were emitted by the dust. As no dust is included in this model, this flux component will be zero as well.

• Transparent flux: the flux that would be detected by the instruments if there were no dust in the system. In this model, the transparent flux is equal to the direct flux.

In figure 3.1 the total fluxes of the original and adjusted SKIRT codes are compared to each other by plotting their SEDs on one graph. One can clearly distinguish the stellar emission of both Plummer models, as they result in two separate black body curves. The hotter stellar object is visible at shorter wavelengths, between 0.1 µm and 1 µm, while the colder one is visible at longer wavelengths, between 1 µm and 10 µm. It is clear that both curves, showing the fluxes of the monochromatic and polychromatic emitted photons, coincide very well, with an almost negligible relative difference spanning only 0.016%. As a reference, a dashed line indicating zero percent is drawn on the graph. One can say that both simulations are in good agreement, hence the emission of the polychromatic photon packages from the stellar objects is performed correctly within the adapted panchromatic simulation.
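The expected peak positions of the two black-body components can be verified with a quick Planck-law calculation. This is an illustrative Python sketch with rounded physical constants, not part of the SKIRT code:

```python
import numpy as np

# Rounded physical constants (SI units)
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength, T):
    """Planck's law: spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    return (2.0 * h * c**2 / wavelength**5) / np.expm1(h * c / (wavelength * k * T))

# The wavelength range of the simulation: 0.1-10 micron
wavelengths = np.logspace(np.log10(0.1e-6), np.log10(10e-6), 201)

peak_hot = wavelengths[np.argmax(planck(wavelengths, 1e4))]   # 10^4 K component
peak_cold = wavelengths[np.argmax(planck(wavelengths, 1e3))]  # 10^3 K component
```

Consistent with figure 3.1, the 10⁴ K component peaks below 1 µm (Wien's law gives ≈ 0.29 µm) and the 10³ K component above it (≈ 2.9 µm).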
Figure 3.1: Top: Comparison of the SED plots for the total flux of the monochromatic (red line) and polychromatic photon packages (green line). Note that the green curve is covered by the red curve, because they coincide almost exactly. Bottom: The difference plot of the total fluxes for the monochromatic and polychromatic photon packages. The differences are expressed in percent.
3.2 Stellar model with dust component

The main purpose of the SKIRT program is of course studying how dust affects the simulated galaxies, consisting of emission sources and a dusty medium. Two important phenomena of the simulation, the absorption and the scattering process, can be tested to investigate these dust effects. In order to test these two processes, a ski file is created consisting of a stellar object with an exponential disk geometry, together with a disk of dust with an exponential geometry. Note that only one dust mixture should be inserted in the model, as the panchromatic calculations for the polychromatic photons would otherwise become too complicated. The default value is used to initialize the dust type; this is an average dust mixture that is appropriate for the typical interstellar dust medium. The stellar SED type used for this system is extracted from the Pegase library. A total of 10⁶ photon packages is launched, covering the spectrum over a wavelength range of 0.1−10 µm. This range is divided into 61 wavelength grid points.

Two detection instruments are set in this model to estimate the radiation it transmits: one instrument looks at the system edge-on, having an inclination of 90 degrees, the other looks at the spiral galaxy face-on, having zero inclination. As discussed in section 1.1, the edge-on view of the spiral model will be affected the most by the dust, as this dust is located predominantly in the disk of the galaxy. This will also be illustrated in the results given in the following sections.

While testing the adaptations discussed in section 2.3, we can again check the correct launching of the photon packages within this somewhat more complex model. This will be done in section 3.2.1 by looking at the transparent fluxes in the SEDs of the simulations, hence the fluxes that result from the stellar photon packages without any dust interaction.
Afterwards, when the results of the polychromatic launching function are found to be in good agreement with expectations, the absorption by the dust can be checked. This will be done in section 3.2.2 by examining the direct fluxes in the SEDs of both simulations. These direct fluxes represent the emergent radiation (the stellar emission) of the object, influenced by the absorption by the dust grains. Scattering of the photon packages is not considered in that part, as this is the third and final step for testing the implementations. Inspecting the dust scattering will be done by looking at the scattered fluxes in the SEDs. This is the trickiest part, as biasing is an important factor in this stage of the simulation. Section 3.2.3 will show the testing results for this part.
3.2.1 Launching of the photon packages

The transparent fluxes of both simulations are given in the SED in figure 3.2. This SED of the spiral model is calculated for a detection instrument looking at the system at an inclination of 90 degrees. The investigated transparent fluxes correspond to the fluxes of the simulation without considering the dust interaction. They are hence ideal for assessing the correct performance of the stellar emission in the simulation of this model. As demonstrated in section 3.1 for the bi-Plummer model, we again expect the monochromatic and polychromatic photon simulations to show approximately the same results, as is confirmed by figure 3.2. The relative difference, displayed in the graph below the SED, even returns values equal to zero percent. This seems somewhat strange at first, but note that the fluxes are stored as data with at most 8 digits after the decimal point. The transparent fluxes of the simulations hence have errors so small that they are not visible in the data. We can conclude that the panchromatic simulation of this edge-on spiral model using polychromatic photon packages does indeed reproduce the same results as when monochromatic photon packages are used.

Figure 3.2: Top: Comparison of the SED plots for the transparent flux of the monochromatic (red line) and polychromatic photons (green line). The green curve is covered by the red curve because they coincide almost exactly. Bottom: The difference plot of the transparent fluxes of both simulations. The differences are expressed in percent.
3.2.2 Absorption of the photon packages

When examining the direct fluxes of the simulations, we can investigate whether the dust absorbs the stellar photon packages in the correct manner. The direct flux Fdir represents the radiation received by the detectors that propagated through the dust without being scattered. Figure 3.3 shows the SED of the edge-on spiral galaxy for both simulations. Again, the red curve corresponds to the simulation using monochromatic photon packages and the green curve to the simulation using polychromatic photon packages. Both coincide very well, having a relative difference between −0.3% and +0.1%. This is still a more than adequate result, as the original simulation itself carries errors resulting from the random seeds generated for every processor. These internal differences of the code are called noise.

Figure 3.3: Top: Comparison of the SED plots for the direct flux of the monochromatic (red line) and polychromatic photons (green line). The green curve is covered by the red curve because they coincide almost exactly. Bottom: The difference plot of the direct fluxes of both simulations. The differences are expressed in percent.
This so-called noise of the Fdir resulting from the monochromatic photon packages, detected by the instrument at an inclination of 90°, is shown in figure 3.4 for a wavelength range covering 0.4−3 µm, corresponding to the peak of the SED. To obtain this figure, the simulation is run 10 times, each run containing 10⁵ monochromatic photon packages. The noise covers a range of about 2% (±1%). The black line in the figure represents the mean noise x̄ of the simulations, calculated for every wavelength, with the 1σλ standard deviations indicated by the error bars. The standard deviation at every wavelength is calculated as follows:

\[
\sigma_\lambda = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i,\lambda}-\bar{x}_\lambda\right)^2} \tag{3.1}
\]

Reducing the noise in these simulations can be done by increasing the number of monochromatic photon packages. When using for example 10⁸ monochromatic photon packages, the noise can be reduced to almost zero.

The same can be done to estimate the noise produced by the polychromatic photon packages in the simulation, displayed in figure 3.5. The relative difference of every simulation, containing 10⁵ polychromatic photon packages, is calculated with respect to a reference simulation containing 10⁸ monochromatic photon packages. As before, the black curve represents the mean noise and the error bars indicate the standard deviation calculated by eq. 3.1. As can be seen in figure 3.5, the simulations produce an error spanning about 1.5%, which is approximately equivalent to the noise produced by the original simulation in figure 3.4. It is of great importance that the relative differences between the original and adapted simulations are always compared to this noise, to obtain an estimate of how accurate both results are. One may notice that the noise produced by the adapted SKIRT code in figure 3.5 is not as random as we expect it to be. Instead, continuous curves corresponding to each simulation with a different random seed are produced.
This is, however, not surprising, as every wavelength-dependent flux Fλ is influenced by the biasing factors used in the SKIRT code. The magnitudes of these weight factors form a continuous curve as a function of wavelength, crossing unity at the reference wavelength.

Using the ds9 software, it is possible to visualize this spiral galaxy in a two-dimensional image, as it would appear in the plane of the sky. To obtain a nicer and more realistic image, a bulge is added to the model. The spiral galaxy is now composed of a disk, having an exponential disk geometry, and a bulge, which has a Sersic geometry. Calculating the image composed of the direct flux Fdir only, a result as given in the bottom of figure 3.6 is obtained for the simulation that uses polychromatic photon packages. The spiral galaxy is viewed from an inclination of 88°. The different colors indicate the magnitude of Fdir, indicated by the color bar below. We can clearly observe the
structure of the edge-on spiral system, due to the emission by the bulge and the disk of the stellar object. A clear dust lane is visible in the image, caused by the dust extinction in the disk of the galaxy. As the differences are too low to visualize, it is not possible to distinguish this result from the image of the original simulation that uses monochromatic photon packages, displayed in the top of figure 3.6. To make the percentage difference clear, a residual frame of the two simulations has to be made, shown in figure 3.7.

Figure 3.4: The noise resulting from the direct fluxes Fdir of the panchromatic simulation using monochromatic photon packages. The error bars indicate 1σ deviations. The simulation is run 10 times, with different random seeds set in the ski files, and run over 20 threads.

Figure 3.5: The noise resulting from the direct fluxes Fdir of the panchromatic simulation using polychromatic photon packages. The error bars indicate 1σ deviations. The simulation is run 10 times, with different random seeds set in the ski files, and run over 20 threads.
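The mean-noise curves and 1σ error bars of figures 3.4 and 3.5 follow directly from eq. 3.1; a minimal Python sketch, where the input rows are the relative-difference curves of the N repeated runs:

```python
import numpy as np

def noise_statistics(relative_differences):
    """Per-wavelength mean noise and sample standard deviation (eq. 3.1).
    `relative_differences` has shape (N_runs, N_wavelengths)."""
    x = np.asarray(relative_differences, dtype=float)
    mean = x.mean(axis=0)
    sigma = x.std(axis=0, ddof=1)  # ddof=1 gives the 1/(N-1) estimator of eq. 3.1
    return mean, sigma
```

The `ddof=1` argument selects the unbiased sample estimator with the 1/(N−1) prefactor, matching eq. 3.1.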
Figure 3.6: V-band images of the edge-on spiral galaxy produced by the panchromatic simulation with the original code using 10⁸ monochromatic photons (top) and the adapted code using 10⁶ polychromatic photons (bottom). These images are the results of the direct fluxes Fdir only.

This residual frame is calculated as:

\[
\mathrm{Frame}_{\mathrm{res}} = \frac{\left|\mathrm{Frame}_{\mathrm{ref}}-\mathrm{Frame}_{\mathrm{sim}}\right|}{\left|\mathrm{Frame}_{\mathrm{ref}}\right|} \tag{3.2}
\]

The frame obtained by the original simulation using 10⁸ monochromatic photon packages is set as the reference frame Frameref. Hence, the pixels in the residual frame are obtained from the difference between the pixel values of the two frames, divided by the pixel values of the reference frame. The color bars represent errors in levels of 12.5%. All white pixels have an error of more than 87.5%, which may exceed 100%. Notice that the outer edges of the edge-on spiral galaxy have the highest errors, due to the detection of very small fluxes in these pixels. A pixel can, for example, detect only one photon in one frame, arriving at that particular location through the various random processes, while detecting none in the other frame. These errors at the edges can probably be reduced by using more photon packages in both simulations. The disk has errors ranging up to 37.5%. Another visible trend concerns wavelength: at longer wavelengths (bottom of the figure) the disk contains more pixels with errors below 12.5%, while at a shorter wavelength (top of the figure) the pattern of a dust lane appears within the disk, producing higher errors. This is the result of the high efficiency of extinction at these wavelengths.
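The residual frames of figure 3.7 can be computed pixel-wise as in eq. 3.2; a minimal Python sketch:

```python
import numpy as np

def residual_frame(frame_ref, frame_sim):
    """Pixel-wise residual of eq. 3.2: |ref - sim| / |ref|.
    Pixels where the reference frame is zero come out as inf/nan and
    would be rendered white (error > 87.5%), like the noisy edge pixels."""
    ref = np.asarray(frame_ref, dtype=float)
    sim = np.asarray(frame_sim, dtype=float)
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.abs(ref - sim) / np.abs(ref)
```

Binning the resulting values into 12.5% levels then reproduces the color coding described above.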
Figure 3.7: The residual frames of the V-band images resulting from the simulation using 10⁶ polychromatic photon packages and the reference V-band image. From the top image to the bottom, the wavelength increases from a short to a longer wavelength bin. The middle frame represents the residual frame in the center of the optical range (around 0.55 µm). The color bar shows percentage differences in levels of 12.5%, ranging from 0% (black) to 100% or more (white).
3.2.3 Scattering of the photon packages

Extinction of the photon packages by the dust does not only result from the physical process of absorption, but from scattering as well. Here we want to check whether this scattering process is performed correctly and find out how strongly the use of biasing factors affects the resulting output. The photons that are scattered by the dust, located predominantly in the disk of the galaxy, will most likely be detected by the face-on instrument system, i.e. the detector that looks at the galaxy from a zero inclination angle. This is due to the absorption in the disk, which prevents photons from being detected in the edge-on view. This is why the results in this section will always be examined for the face-on spiral galaxy.

The scattered fluxes Fsca detected by the face-on instruments of both simulations are displayed in figure 3.8 and coincide very well. Notice that we have restricted the range of the scattered flux values of the SED to 4 orders of magnitude. Looking at the relative difference plot, the error is very low for short wavelengths: for wavelengths below 7 µm the relative difference lies between −2% and +2%. The red part of the SED spectrum shows higher differences, not shown in the figure, reaching approximately −16%. These wavelengths however contain scattered fluxes with magnitudes as low as 10⁻¹³, for which very small absolute errors result in very high percentage differences. It is therefore only useful to look at the highest fluxes of the SEDs, ranging over 4 orders of magnitude.

We can again examine the noise resulting from the adapted simulation for the scattered fluxes, detected by the face-on instrument, as this is important to estimate the accuracy and correctness of the new SKIRT code.
The percentage difference of every simulation, each with a different random seed initiated in the ski file, is calculated with respect to a simulation containing 10⁸ monochromatic photon packages, as this simulation produces noise that is practically zero due to its high precision. The results are displayed in figure 3.9. The noise covers a range of 2% (±1%) within 1σ for wavelengths shorter than 1 µm, while it increases for longer wavelengths. We know that using a larger number of monochromatic photon packages in the original SKIRT code can reduce the noise and increase the accuracy of the simulation as much as desired. In the adapted SKIRT code, however, this does not suffice, as the biasing factors wλ that induce these large errors depend on parameters of the dust system, i.e. the total dust mass. A possibility to reduce this noise is the use of partly polychromatic photon packages, i.e. splitting the photon package into multiple photons covering smaller wavelength ranges. This concept of photon splitting should reduce the biasing factors used in the simulatepropagation function. It will be explained in more detail in section 3.4.
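As argued above, percentage comparisons are only meaningful for the brightest part of the scattered-flux SED; bins more than 4 orders of magnitude below the peak are dominated by tiny absolute fluxes. A hedged sketch of such a masked comparison (illustrative Python, not the actual analysis script used for the figures):

```python
import numpy as np

def masked_relative_difference(flux_ref, flux_sim, decades=4):
    """Relative difference (in percent) between two SEDs, computed only
    where the reference flux lies within `decades` orders of magnitude of
    its maximum; fainter bins are returned as NaN, since tiny absolute
    errors there translate into huge percentages."""
    ref = np.asarray(flux_ref, dtype=float)
    sim = np.asarray(flux_sim, dtype=float)
    diff = 100.0 * (sim - ref) / ref
    diff[ref < ref.max() * 10.0**(-decades)] = np.nan
    return diff
```

With `decades=4` this reproduces the 4-orders-of-magnitude restriction applied to figure 3.8.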
Figure 3.8: Top: Comparison of the SED plots for the scattered flux Fsca of the monochromatic (red line) and polychromatic photon packages (green line). The green curve is covered by the red curve because they coincide almost exactly. Bottom: The difference plot of the scattered fluxes of both simulations. The differences are expressed in percent.

Figure 3.9: The noise for the scattered flux produced by the panchromatic simulation using 10⁵ polychromatic photon packages. For the calculations, the simulation is run 10 times, with different random seeds set in the ski files, and run over 20 threads.
Figure 3.10: V-band images of the edge-on spiral galaxy produced by the scattered photons, simulated with the original code (top) and the adapted code (bottom).

Using ds9, we can obtain the two-dimensional V-band images of the scattered flux Fsca produced by the galaxy. As in the previous section, the model that contains only an exponential disk is improved to a more realistic case by adding a Sersic bulge to the stellar geometry. The obtained results are displayed in figure 3.10 for both panchromatic simulations, viewed from an inclination of 88°.

Comparing the V-band image resulting from the original monochromatic SKIRT code with the V-band image resulting from the adapted polychromatic SKIRT code is, just as before, done by creating a residual frame using eq. 3.2. This frame is shown in figure 3.11, where the color bar ranges from 0% to 100%. The white pixels can in this case contain percentage differences that exceed 100%. It is clear that the edge of the spiral galaxy has errors higher than 87.5%, caused by the very low or even zero detected scattered fluxes in these pixels. The center of the disk contains pixels with errors below 12.5% for longer wavelengths (bottom of the figure), while these pixels have higher errors at shorter wavelengths (top of the figure), due to the high extinction rate in this wavelength band.
Figure 3.11: The residual of the V-band image resulting from the simulation using 10⁶ polychromatic photon packages and the reference V-band image. From the top image to the bottom, the wavelength increases from a short to a longer wavelength bin. The middle frame represents the residual frame in the center of the optical range (around 0.55 µm). The color bar represents percentage differences in levels of 12.5%, ranging from 0% (black) to 100% or more (white).
3.3 Optically thin and thick models

In the previous section we found that the scattered fluxes Fsca of the spiral model for the monochromatic and polychromatic algorithm were in relatively good agreement. The calculated differences in the SED between both methods were low enough to conclude that the polychromatic algorithm works properly. However, we only considered a particular model containing a dust mass of 4 × 10⁷ M⊙. As the optical depth τλ is given by

\[
\tau_\lambda = \int \kappa_\lambda\,\rho\,\mathrm{d}s = \kappa_\lambda\,\Sigma,
\]

with Σ the mass integral along the photon's path and κλ the extinction coefficient, τλ can reach large values when the dust mass is increased, making the model optically thick (τλ ≫ 1). This has a dramatic effect on the values of the biasing factors wλ. Recall eq. 1.12:

\[
w_\lambda = e^{\tau_{\mathrm{ref}}-\tau_\lambda}\,\frac{\tau_\lambda}{\tau_{\mathrm{ref}}}\,\frac{1-e^{-\tau_{\mathrm{ref,path}}}}{1-e^{-\tau_{\lambda,\mathrm{path}}}} = e^{\Sigma\,(\kappa_{\mathrm{ref}}-\kappa_\lambda)}\,\frac{\kappa_\lambda}{\kappa_{\mathrm{ref}}}\,\frac{1-e^{-\tau_{\mathrm{ref,path}}}}{1-e^{-\tau_{\lambda,\mathrm{path}}}}
\]

One can see that the weight factor wλ grows exponentially with the mass integral Σ. Increasing the dust mass in the spiral galaxy, i.e. changing the optically thin system into an optically thick system, can therefore cause the biasing factors to rise to extremely high values. In the model used in the preceding sections, the magnitude of wλ even exceeds a value of 2800 at a wavelength of 6.30 µm; this is of course unacceptable when we want to obtain accurate results. These high weight factors have a tremendous effect on the detected scattered flux of the system, especially in the face-on configuration. An example of this effect is shown in figure 3.12 for near-IR wavelengths, where the frames on the left, obtained by the reference monochromatic simulations, are compared to the frames on the right, resulting from the polychromatic algorithm. From top to bottom the results are shown for systems having optical depths τλ equal to 0.1, 1, 10 and 100, specified at a wavelength of 6.30 µm.
One can clearly observe that the images are affected by the large biasing factors when the optical depth τλ of the system is increased, as the frames become very noisy. Some pixels seem to detect flux densities that have been boosted to very high values, shown by the yellow and red pixels scattered over the exponential disk. Notice the low fluxes Fsca for τλ = 0.1, as the optically thin system causes little scattering. At τλ = 100, Fsca seems smaller as well. This is due to the high amount of photon absorption, by
Figure 3.12: Images in the near-IR of the face-on spiral galaxy. The left frames are obtained by the monochromatic algorithm, the right by the polychromatic algorithm. From top to bottom the optical depths τλ increase from 0.1 to 100. Notice the high noise for the optically thick systems, originating from the extremely high biasing factors by which the luminosities at these wavelengths are altered.
which few scattered photons can reach the detectors. The table below illustrates how enormous the weight factors can become for the optical depths used in the above simulations. Notice that in the τλ = 100 case, the highest encountered weight factor is too big to be stored as a double in the code.

    τλ      wλ,max
    0.1     16.2338
    1       6.22089 × 10^16
    10      1.44838 × 10^173
    100     inf

The effect that these magnitudes of the optical depth τλ have on the accuracy of the model can be seen in figure 3.13, where the relative differences between the scattered flux densities produced by the monochromatic and the polychromatic photons are calculated for four different τλ-values and two different inclination angles. From these graphs we can see that for small optical depths the results agree very well. Notice that for τλ = 0.1 a 'tail' with higher discrepancies appears at wavelengths longer than 6 µm. This is due to the very small magnitudes of the scattered flux Fsca at these wavelengths, as scattering is very inefficient in the Far IR range. For larger optical depths, i.e. τλ ≥ 10, higher discrepancies become visible; in the τλ = 100 case the relative differences can increase to more than 50%. In other words, when the optical depth increases, the calculations made with polychromatic photon packages become increasingly affected by the large range of optical depths encountered at different wavelengths. The reference wavelength used lies in the center of the optical range, around 0.55 µm, where the extinction coefficient and thus the opacity is very high. The accuracy suffers most at longer wavelengths, starting at about 2 µm, because the extinction coefficient drops to a much lower value there, making the extinction processes less efficient and the opacity lower.
Figure 3.13: Difference in SED of the scattered flux between the results obtained with 10^6 monochromatic photon packages and 10^6 polychromatic photon packages. The simulated models range from optically thin (top left) to optically thick (bottom right). In the most optically thick cases the discrepancies can increase to 50%.

As has already been pointed out in the previous section, a possible solution to avoid this drawback is the use of partly polychromatic photon packages. The photon package will be split one or more times, allowing the red photon package, i.e. the package that contains the range of longer wavelengths, to be assigned a longer reference wavelength. The extinction coefficients at these longer wavelengths have values closer to κref,red, causing the biasing factors wλ to remain within particular bounds. This technique, as will be explained in the following section, makes it possible to limit the magnitude of the biasing factors.
3.4 Splitting of the photon packages

3.4.1 The concept of photon splitting

The polychromatic algorithm has been tested for some basic stellar and dusty models, showing that it does indeed reproduce approximately the same results as the monochromatic simulation. In very optically thick models (with optical depths τλ ≫ 1), extremely large weighting factors can occur, as eq. 1.12 can increase without bound. This suggests that a proper choice of reference wavelength is indeed crucial for the accuracy of the method.

The disadvantages resulting from the large weighting factors can probably be alleviated by splitting the photon packages into multiple parts. In this way, partly polychromatic photon packages are created, which contain a range of wavelengths with a certain reference wavelength, but not the whole electromagnetic spectrum. The mechanism should be implemented in such a way that a particular photon package continues splitting until it contains a range of weighting factors close enough to unity. This mechanism is represented in a simplistic manner in figure 3.14, where an original photon is split into a blue and a red photon. The splitting happens at the index of the reference wavelength. The red photons, i.e. the photons containing the wavelength range above the reference wavelength, are consecutively stored in a deque, while the blue photons continue their calculations.

From equation 1.12, it can be seen that the weight factors grow exponentially with the magnitude of the random interaction optical depth τλ. In other words, the larger the random propagation path s of the photon package, the higher the biasing factors at

[Diagram: the original photon, covering wavelengths λ1 ... λref ... λN, is split into a blue photon (λ1 ... λref, with new reference λref,blue) and a red photon (λref+1 ... λN, with new reference λref,red); the blue photon can in turn be split into a blue-blue and a blue-red photon.]

Figure 3.14: Demonstration of the splitting of a photon package into a blue and a red part.
The simulation will continue with the blue photon package and split it further if necessary.
certain wavelengths will be. Because of this, the implementation of the photon splitting should be placed within the fillDustSystemPath function, such that the photon is split before the random interaction point is drawn. In the opposite case, when the random scattering location is determined before the photon is split, every photon that would propagate to and interact close to the edge of the medium, where large interaction optical depths are found, would be forced to interact deeper in the medium, where smaller interaction optical depths are found. This is of course undesirable if we want to obtain accurate results.

In the function fillDustSystemPath, a flag ynsplit is defined, initially set to false, indicating whether the photon package has been split or not. Next, a routine starts that calculates the biasing factors wλ,path at the end of the photon's path. This is done by using τλ = τλ,path and τref = τref,path in equation 1.12, hence calculating q(τλ,path)/q(τref,path). Whenever one of the weight factors wλ,path deviates too much from unity, the photon is split. This deviation limit has to be determined experimentally, which will be done in section 3.4.2. A red package is created, having all the characteristics of the original photon package except for the minimum and maximum wavelength indices, and stored at the end of the deque. In the original package, now referred to as the blue photon package, a new reference wavelength (set as the center of the photon's wavelength range) and a new maximum wavelength index (above which the luminosities are set to zero) are indicated. The ynsplit boolean is set to true such that the calculation can start over. This process continues, each time with the blue part of the package that has been split, until the range of biasing factors for that particular blue photon is within the selected limits.
The routine described in section 2.3 now continues by calling the different functions that describe the blue photon package's life cycle. Every partly polychromatic photon that is stored in the deque will consecutively undergo the life cycle described in section 2.3, before a new polychromatic photon package can be launched. Notice that a small adjustment must be made to the termination of the photon's life cycle. At the moment, a photon dies when its total luminosity decreases below a minimum value, determined by the total luminosity of the system Ltot and the number of photon packages Npp in the simulation through the relationship

L_\mathrm{min} = 10^{-4} \times \frac{L_\mathrm{tot}}{N_\mathrm{pp}} . (3.3)

When a photon is split into multiple parts, the different parts cover smaller wavelength ranges and thus contain smaller total luminosities. This can induce a problem, as the split photons would probably terminate their life cycles too soon, based on the minimum luminosity in eq. 3.3. To solve this problem, Lmin is redefined by
using the largest luminosity Lλ,max that a photon package contains when it is taken from the deque:

L_\mathrm{min} = 10^{-4} \times L_{\lambda,\mathrm{max}} . (3.4)

This maximal luminosity is extracted from the photon by implementing a small function maxluminosity in the PhotonPackage class. Whenever the maximal luminosity a photon contains drops below this critical value (eq. 3.4), its life cycle is terminated.

3.4.2 Determination of the optimal biasing limit

The value of wλ,path at which a photon package should split has to be determined experimentally. In this project we try to improve not only the accuracy of the simulations, but the speed as well; our purpose now is to find an optimal balance between these two benefits of (partly) polychromatic photon packages. As the biasing factor wref,path at the reference wavelength equals one, we speak of a deviation from unity, determined by |wλ,path − 1|. When the maximal allowed deviation, which we will now refer to as bdev, is extremely small, the simulation using partly polychromatic photons reduces to the simulation using monochromatic photons. In this case the biasing factors all equal unity and hence the accuracy is optimal, but because a lot of photon package splitting occurs, the simulation time increases unnecessarily. On the other hand, when bdev is extremely large, the simulation reduces to the polychromatic case in which the splitting technique is not used. In this case the accuracy is lower, but the simulation speeds up. Note that for the spiral model used in the previous sections, wλ,path can reach values up to 2000, although such values are not very common.

Amount of photon splitting

To get a picture of the amount of photon splitting occurring in a simulation that uses a particular value of bdev, we can plot the splitting percentage, given by

\frac{N_\mathrm{split}}{N_\mathrm{pp}} \times 100\% ,

as a function of these bdev-values.
For this, a counter is added in the code to keep track of the number of splits Nsplit that occur. Recall from the previous section that one photon can split multiple times, so the splitting percentage can exceed 100%. In the spiral model used in this project, the wavelength range is divided into 61 bins, making the maximal possible value of the splitting percentage 6000%, as each photon package can split at most 60 times. Some arbitrary values of bdev are chosen,
given in Table 3.1, and for each of them the simulation is run 10 times with a different random seed set in the ski-file. Note that the simulations are done using only one thread; otherwise, due to the parallelization in SKIRT, a photon that is stored in the deque could be extracted and processed multiple times. The number of split photons in each simulation corresponding to every bdev-value is shown in the table, together with the calculated average and standard deviation. This result is shown graphically in figure 3.15, which plots the average splitting percentage as a function of bdev. The error bars, indicating the 1σ uncertainty given in the table, are too small to be visible. As expected, we obtain a declining curve, i.e. more photons are split when a strict (low) limit is used than when a more lax limit is used. Choosing a low deviation limit bdev would thus increase the simulation time, which is of course undesirable. This means that a high value of bdev, causing little or no photon splitting, is preferred for optimal speed. However, this technique was introduced in order to obtain more accurate results; as we will see in the next section, that requires a lower value of bdev. We can already conclude that the most relevant values for the limit of the biasing factor lie between bdev = 0.1 and 10. For bdev higher than 10 the photon splitting becomes negligible, while for bdev lower than 0.1 the amount of photon splitting becomes incredibly high. Note that for an optically thick system, a higher splitting percentage is expected at a fixed biasing limit than for an optically thin system.

Figure 3.15: The percentage of split photon packages with respect to the value of bdev. The error bars, which are too small to be visible, indicate the 1σ deviation from the mean. The simulations are run with 10^6 partly polychromatic photon packages.