Microlensing Modelling
1. Microlensing Modelling with Nested Sampling
Ashna Sharan
PhD Candidate
Supervisor: Dr. Nicholas Rattenbury
Department of Physics
University of Auckland
8. Gravitational Lensing
Einstein Ring
In the case of perfect alignment of the observer, lens and source, the multiple distorted images merge and form a ring-like structure.
[Image: an Einstein ring mirage wrapped around a galaxy, captured by the Hubble Space Telescope.]
10. Gravitational Lensing
Microlensing
The images are too small (0.2-2 milliarcseconds) to be resolved by telescopes.
Due to the relative motion of the source and lens, we see a brightening of the source star.
12. Microlensing - Single Lens
µ0 is the impact parameter: the distance of closest approach between the source and the lens.
[Diagram: source trajectory µ(t) projected onto the lens plane, with the source star and the Einstein ring of radius θE around the lens.]
[Plot: amplification versus time τ for µ0 = 0.1 and µ0 = 0.01.]
The smaller the µ0, the higher the peak of the light curve.
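The curves in the plot follow the standard point-source point-lens (Paczyński) amplification law, which the slides use implicitly; a minimal Python sketch, writing the deck's µ0 as u0:

```python
import math

def amplification(u):
    """Point-source point-lens amplification A(u), where u is the
    source-lens separation in units of the Einstein radius."""
    return (u * u + 2) / (u * math.sqrt(u * u + 4))

def separation(t, u0, t0, tE):
    """Separation u(t) for a straight-line trajectory: impact parameter u0,
    time of closest approach t0, Einstein-radius crossing time tE."""
    tau = (t - t0) / tE
    return math.sqrt(u0 * u0 + tau * tau)

# Peak amplification occurs at closest approach, u = u0;
# the smaller u0, the higher the peak (A ~ 1/u0 for small u0).
peak_small = amplification(0.01)   # ~ 100
peak_large = amplification(0.1)    # ~ 10
```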
16. Microlensing - Binary Lens
Binary-lens light curves
Depending on the source-lens configuration, binary-lens light curves can take many forms.
[Plot: amplification versus time (JD - 2450000) for an example event spanning roughly 6705-6740.]
19. Microlensing Modelling Objectives
Need to find the best model to represent the observational microlensing dataset.
The Parameter Estimation Problem: for a given model, find a set of parameter values to best fit the data.
The Model Selection Problem: choose between alternative models.
21. The Parameter Estimation Problem
1. Data - photometric.
2. Model - the lens equation.
3. Error function - the difference between the data and the model's prediction for any given set of model parameters. For example, the χ² function (the sum of squares of the normalized residuals).
Optimization:
• Least-squares estimate - minimize the χ² function. Or,
• Maximum likelihood estimate - maximize the likelihood function, L, which can be approximated by L ∝ exp(−χ²/2).
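The error function and approximate likelihood above reduce to a few lines of Python; the arrays below are toy placeholders rather than real photometry:

```python
import numpy as np

def chi2(flux, model_flux, sigma):
    """Chi-square: sum of squares of the normalized residuals."""
    return float(np.sum(((flux - model_flux) / sigma) ** 2))

def log_likelihood(flux, model_flux, sigma):
    """ln L with L proportional to exp(-chi2/2); the proportionality
    constant cancels when comparing parameter sets of the same model."""
    return -0.5 * chi2(flux, model_flux, sigma)

# toy check: three measurements against a constant model
flux = np.array([1.1, 0.9, 1.0])
model_flux = np.ones(3)
sigma = np.full(3, 0.1)
# residuals are (1, -1, 0) sigma, so chi2 = 2
```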
22. The Data - Photometric
The MOA, OGLE and KMTNet microlensing groups monitor hundreds of millions of stars in the Galactic Bulge.
Microlensing photometric data are obtained via Difference Image Analysis (DIA).
23. The Data - Photometric
More datasets come from follow-up groups with their narrow-field telescopes.
24. The Model - Lens Equation
The lens equation maps image positions to source positions. We know the source positions and need to find the image positions to compute the amplifications.
Amplification is the area of the images relative to the area of the source.
25. Microlensing Modelling Challenges
The lens equation:
• Is an (N² + 1)th-degree complex polynomial for N lenses: 5th degree for a binary lens, 10th for a triple lens.
• Cannot compute the finite-source effect directly.
• Is undefined on caustic curves.
Caustic curves are regions of infinite amplification (theoretical) for a point source and very high amplification for a finite source.
27. Microlensing Modelling Challenges
Higher Order Effects
Finite-source effects:
• Prominent in most microlensing events, so cannot be overlooked.
• Images are not points but disjoint areas which can only be found numerically.
Parallax, xallarap, and orbital motion of the lens objects.
A typical binary lens with higher-order effects has 9 or more parameters: µ0, t0, tE, ε, d, α, ρ, ω, π.
Numerical solutions required!
30. Microlensing Modelling Code for Binary-lens System
GPU-accelerated binary-lens modelling code developed by Joe Ling, Massey University:
• Magnification Map Technique.
• Dynamic Light Curve Engine*.
* Recently acquired from Joe.
31. Magnification Map
ε = 0.57, d = 0.9
Magnification maps are 2-D arrays of solutions to the lens equation, representing the parameter space {ε, d}. Each pixel represents the amplification of the source star at a point in time.
32. Magnification Map Caustic Curve Patterns
Single Lens
The caustic is a point (the position of the lens).
ε = 0, d = 0
Brighter color indicates higher amplification.
34. Magnification Map Caustic Curve Patterns
Binary Lenses
The caustic curve patterns are more complicated.
ε = 0.5, d = 0.5
ε = 0.1, d = 2.0
35. Magnification Map with Source Trajectory
A model trajectory of the source star, representing {t0, tE, µ0, α, ρ}, produces a unique light curve.
36. Magnification Map Generation - Inverse Ray Shooting
Billions of rays are shot backwards from the observer, through the lens, onto the source plane.
37. Magnification Map Generation - Inverse Ray Shooting
Image Area Calculation
The lens plane is divided into a rectangular grid and rays are shot from the 4 corners of each grid cell.
Image Credit: (Ling, 2013)
38. Magnification Map Generation - Inverse Ray Shooting
Billions of rays are shot evenly from the image area onto the source plane to determine the magnification of each of its pixels.
Image Credit: (Ling, 2013)
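As a rough illustration of the ray-shooting idea (not Ling's GPU code), the binary lens equation in complex notation, zeta = z - eps1/conj(z - z1) - eps2/conj(z - z2), can be shot over a grid and binned on the source plane; the grid sizes and lens positions here are arbitrary:

```python
import numpy as np

def magnification_map(eps1, z1, z2, n_side=1000, half_width=2.0, n_pix=100):
    """Toy inverse ray shooting for a binary lens: shoot a uniform grid of
    rays from the lens plane through the lens equation and count the hits
    per source-plane pixel; counts are proportional to magnification."""
    eps2 = 1.0 - eps1
    x = np.linspace(-half_width, half_width, n_side)
    z = x[None, :] + 1j * x[:, None]                 # rays in the lens plane
    # binary lens equation: deflect each ray onto the source plane
    zeta = z - eps1 / np.conj(z - z1) - eps2 / np.conj(z - z2)
    counts, _, _ = np.histogram2d(zeta.real.ravel(), zeta.imag.ravel(),
                                  bins=n_pix, range=[[-1, 1], [-1, 1]])
    # normalize so an unlensed patch of sky would give magnification 1
    rays_per_unit_area = (n_side / (2 * half_width)) ** 2
    pixel_area = (2.0 / n_pix) ** 2
    return counts / (rays_per_unit_area * pixel_area)

mag = magnification_map(eps1=0.5, z1=-0.25 + 0j, z2=0.25 + 0j)
```

Pixels near the caustics collect far more rays than the unlensed density, which is exactly the brightening pattern the maps visualize.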
39. Magnification Maps - Advantages
The finite-source amplification can be computed directly by integrating over the source area on the magnification map.
Amplifications in caustic curve regions can be determined.
41. Caustic Curve Diagrams and Finite Source Effect
ε = 0.5, d = 0.5, ρ = 0.001
The source crosses the caustic curve at two points.
43. Caustic Curve Diagrams and Finite Source Effect
ε = 0.5, d = 0.5, ρ = 0.02
Finite source effect: the peaks appear washed out.
45. Magnification Maps - Advantages
Multiple light curves can be extracted from the same magnification map.
[Diagram: several source trajectories, each defined by an angle α and impact parameter µ0, overlaid on a magnification map, with the light curves corresponding to each trajectory.]
47. Magnification Map Technique
Grid search coupled with the downhill simplex optimization method: hundreds of thousands of light curves are extracted from thousands of magnification maps to find rough initial models.
However, the Magnification Map Technique cannot be used when we want to:
• Optimize ε and d as free parameters during a Markov Chain Monte Carlo (MCMC) run.
• Account for orbital motion, whereby the projected distance d changes for each source position in time.
→ Dynamic Light Curve Engine.
50. Dynamic Light-Curve Engine
Computes the amplification value "on the fly", bypassing magnification map generation.
"Image-centred" inverse ray shooting. The difference: rays are shot to the source star disk, not to the entire source plane.
MCMC optimization is used for finding more accurate models.
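The slides do not show the MCMC itself; as a generic sketch, a random-walk Metropolis sampler looks like the following, with a Gaussian toy target standing in for the real light-curve likelihood (the names u0 and tE are only labels here):

```python
import numpy as np

def metropolis(log_like, theta0, step, n_steps=4000, seed=0):
    """Random-walk Metropolis: propose a Gaussian step and accept it with
    probability min(1, L'/L); returns the chain of parameter vectors."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = log_like(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        ll_prop = log_like(proposal)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis acceptance
            theta, ll = proposal, ll_prop
        chain[i] = theta
    return chain

# toy target: Gaussian log-likelihood centred on (u0, tE) = (0.2, 15.0)
centre = np.array([0.2, 15.0])
width = np.array([0.05, 1.0])
chain = metropolis(lambda th: -0.5 * np.sum(((th - centre) / width) ** 2),
                   theta0=[0.5, 10.0], step=0.3 * width)
```

After burn-in the chain samples the posterior around the true parameters, which is how the engine refines the rough grid-search models.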
51. Dynamic Light Curve Engine: Image-centred IRS
Image Area Calculation
Solve the complex polynomial to find the point image positions (active cells), then recursively check whether the neighbouring cells are active.
52. Dynamic Light Curve Engine: Image-centred IRS
Shoot an equal density of rays from the image area for each source position in time. The relative amplification is the number of collected rays that land inside the source star.
54. Microlensing Modelling Objectives
The Parameter Estimation Problem:
• Magnification Map Technique with grid search and downhill simplex.
• Dynamic Light Curve Engine with MCMC.
The Model Selection Problem?
57. The Microlensing Model Selection Problem
Choose between multiple competing models with comparable χ² values.
Two different models specified by different numbers of parameters:
• Binary lens or triple lens?
• Static binary lens, or binary lens with orbital motion?
One specific model produces two comparable modes with different sets of parameter values.
58. Example 1 - Microlensing Model Selection Problem
Solution 1: a star-and-planet lens system with orbital motion.
Solution 2: a static binary-star lens system.
59. Example 2 - Microlensing Model Selection Problem
OGLE-2004-BLG-490: binary-lens model with 7 parameters.
ε = 0.04, d = 1.43, ρ = 0.001, α = 6.02, t0 = 3224.0, tE = 14.9, µ0 = 0.22
60. Example 2 - Microlensing Model Selection Problem
OGLE-2004-BLG-490: binary-lens model with 7 parameters.
ε = 0.11, d = 1.71, ρ = 0.08, α = −0.01, t0 = 3225.3, tE = 12.7, µ0 = 0.32
61. Example 2 - Microlensing Model Selection Problem
OGLE-2004-BLG-490: binary-lens model with 7 parameters. Which one would you choose as the most probable light curve?
LC 1: χ² = 933. LC 2: χ² = 957.
Using the chi-square goodness-of-fit test, one would choose LC 1 as the most probable light curve, despite a chunk of it being unsupported by data points.
62. Example 3 - Microlensing Model Selection Problem
OGLE-2007-BLG-472: the global χ² minimum had to be rejected because it implied physically implausible parameters (Kains et al., 2012).
63. Chi-square Test for Goodness of Fit
Weaknesses of the least-squares (equivalently, maximum likelihood) approach:
• Can lead us to over-parametrized models. Occam's Razor is not quantified: is a simpler model always better? Not if the complexity of the data justifies a more complex model!
• Can lead us to choose a sub-optimal mode.
• There might be a need to reject the lowest-χ² models on the basis of physical implausibility.
64. The Bayesian Approach to Model Selection
Model selection is a difficult task because we cannot simply choose the model that best fits the data.
The Bayesian approach offers a much more powerful way of comparing models.
66. The Bayesian Evidence for Model Selection
Bayes' Theorem: Posterior × Evidence = Likelihood × Prior   (1)
The evidence:
• Normalizes the posterior as a probability distribution over all the parameters.
• Provides quantitative "evidence" in favour of one model over another.
• Naturally implements Occam's razor and guards against over-fitting.
• Is computationally expensive, but crucial for model selection problems.
67. Bayesian Model Selection via Bayes' Factor
Given two models M1 and M2, we can decide which one is favoured by computing the Bayes factor, the ratio of the model evidences:
K = Z1 / Z2   (2)
A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2.
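Since samplers report ln Z rather than Z, the Bayes factor is computed as the exponential of the log-evidence difference; a small sketch with made-up numbers:

```python
import math

def bayes_factor(log_z1, log_z2):
    """K = Z1/Z2, computed from log-evidences for numerical stability."""
    return math.exp(log_z1 - log_z2)

# hypothetical log-evidences for two competing lens models
K = bayes_factor(-512.3, -515.8)   # exp(3.5), about 33: M1 is favoured
```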
70. Nested Sampling - Model Selection & Parameter Estimation
A Monte Carlo optimization method developed by John Skilling (2004).
Performs straightforward model comparison by direct computation of the Bayesian evidence.
Achieves simultaneous Bayesian model selection and Bayesian parameter estimation as a by-product.
71. Nested Sampling
Image credit: Feroz et al., 2013
A population of live points is randomly sampled from the prior. At each iteration i, the point with the lowest likelihood value, Li, is removed from the live point set and replaced by a new point drawn from the prior under the constraint that its likelihood is higher than Li.
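The iteration described above can be written out for a one-dimensional toy problem; this is a deliberately naive sketch (brute-force constrained draws), not MultiNest's algorithm:

```python
import numpy as np

def log_like(x):
    """Toy 1-D likelihood: a normalized Gaussian on the uniform [0, 1] prior,
    so the true evidence is Z close to 1 (ln Z close to 0)."""
    return -0.5 * ((x - 0.5) / 0.05) ** 2 - np.log(0.05 * np.sqrt(2 * np.pi))

def nested_sampling(n_live=100, n_iter=600, seed=1):
    """Skilling's iteration: discard the worst live point, credit its
    likelihood with the shrinking prior volume, and replace it with a
    prior draw above the likelihood threshold."""
    rng = np.random.default_rng(seed)
    live = rng.random(n_live)
    live_ll = log_like(live)
    log_z, log_x = -np.inf, 0.0          # running ln Z, ln prior volume
    for i in range(n_iter):
        worst = int(np.argmin(live_ll))
        log_x_new = -(i + 1) / n_live    # volume shrinks by ~e^(-1/n_live)
        weight = live_ll[worst] + np.log(np.exp(log_x) - np.exp(log_x_new))
        log_z = np.logaddexp(log_z, weight)
        log_x = log_x_new
        while True:                      # constrained draw from the prior
            x = rng.random()
            if log_like(x) > live_ll[worst]:
                live[worst], live_ll[worst] = x, log_like(x)
                break
    return float(log_z)
```

For this toy the estimate should come out near ln Z = 0, up to sampling scatter of order sqrt(H/n_live).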
72. MultiNest
A nested-sampling-based algorithm introduced by Feroz, Hobson and Bridges.
Successfully explores multi-modal and moderately high-dimensional parameter spaces.
The active region nests inwards as the prior domain is restricted by the minimum-likelihood condition.
74. PyMultiNest
The MultiNest sampling engine has a Python interface, PyMultiNest, written by Johannes Buchner.
Two main input functions:
• Prior.
• Log-likelihood.
76. PyMultiNest - Prior
The prior function needs to transform the native parameter space, uniformly distributed in [0, 1], to the physical parameters specific to the problem.
[Image: example prior function]
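A prior transform of this kind might look like the following for a hypothetical subset of binary-lens parameters; the ranges are illustrative, not the deck's actual priors, and the callback follows PyMultiNest's convention of modifying the unit-cube array in place:

```python
import math

def prior(cube, ndim, nparams):
    """Map unit-cube samples, uniform in [0, 1], onto physical parameters.
    The parameter names and ranges below are illustrative only."""
    cube[0] = cube[0]                          # u0: uniform in [0, 1]
    cube[1] = 3200.0 + 50.0 * cube[1]          # t0: uniform in [3200, 3250]
    cube[2] = 10.0 ** (2.0 * cube[2])          # tE: log-uniform in [1, 100] days
    cube[3] = 10.0 ** (3.0 * cube[3] - 3.0)    # eps: log-uniform in [1e-3, 1]
    cube[4] = 2.0 * math.pi * cube[4]          # alpha: uniform in [0, 2*pi]
```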
78. PyMultiNest Outputs
Outputs:
• Maximum a posteriori (MAP) parameters of all the modes found.
• Local log-evidences of all the modes found, and the global log-evidence.
This enables straightforward model comparison by taking the difference of the log-evidences (the log of the Bayes factor).
80. Summary
GPU-accelerated code: fast and efficient parameter estimation.
Magnification Map Technique:
• Grid search with the downhill simplex optimization method, for rough initial models.
Dynamic Light Curve Engine:
• Orbital motion modelling enabled.
• MCMC optimization method, for more accurate models.
Nested Sampling Method for Model Selection.
84. Primary Research Goals
Solve the microlensing model selection problem using nested sampling optimization.
Write code to enable MultiNest optimization with the Dynamic Light Curve Engine.
Test and validate the code by comparison with published microlensing results.
Model current microlensing events.