Malina Kirn's dissertation defense, University of Maryland Scientific Computation program, 2011-09-06: using neural networks and grid computing to measure the top quark pair production cross section with the Compact Muon Solenoid detector at the Large Hadron Collider.
Modern power systems worldwide have grown in the complexity of their interconnections and in power demand. The focus has shifted towards enhanced performance, increased customer focus, and low-cost, reliable, clean power. In this changed perspective, the scarcity of energy resources, rising generation costs, and environmental concerns make optimal economic dispatch a necessity. In reality, power stations are neither at equal distances from the load nor do they have similar fuel cost functions. Hence, to provide cheaper power, the load has to be distributed among the various power stations in a way that yields the lowest generation cost. Practical economic dispatch (ED) problems have highly non-linear objective functions with rigid equality and inequality constraints. Particle swarm optimization (PSO) is applied to allocate the active power among the generating stations so as to satisfy the system constraints and minimize the cost of generated power. The viability of the method is analyzed for accuracy and rate of convergence. The economic load dispatch problem is solved for three- and six-unit systems using PSO and a conventional method, both with and without transmission losses. The PSO results were compared with the conventional method and found to be superior; conventional optimization methods struggle with such problems because they converge to local optima. Since its introduction roughly 15 years ago, particle swarm optimization has been a potential solution to the practical constrained economic load dispatch (ELD) problem, and the technique is constantly evolving to provide better and faster results.
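As a concrete illustration of how PSO allocates generation, here is a minimal sketch for a toy three-unit system with quadratic fuel-cost curves. The cost coefficients, unit limits, and demand below are illustrative placeholders, not values from this work, and the power-balance equality constraint is handled with a simple penalty term.

```python
import random

# Illustrative quadratic fuel-cost curves F_i(P) = a + b*P + c*P^2 ($/h).
# Coefficients, limits (MW), and demand are made up for this sketch.
UNITS = [  # (a, b, c, Pmin, Pmax) for a toy three-unit system
    (500.0, 5.3, 0.004, 200.0, 450.0),
    (400.0, 5.5, 0.006, 150.0, 350.0),
    (200.0, 5.8, 0.009, 100.0, 225.0),
]
DEMAND = 975.0  # MW, transmission losses neglected

def cost(P):
    fuel = sum(a + b * p + c * p * p for (a, b, c, _, _), p in zip(UNITS, P))
    # Penalty enforces the power-balance equality constraint sum(P) = DEMAND.
    return fuel + 1e4 * abs(sum(P) - DEMAND)

def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    lo = [u[3] for u in UNITS]
    hi = [u[4] for u in UNITS]
    X = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(n_particles)]
    V = [[0.0] * len(UNITS) for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i, x in enumerate(X):
            for d in range(len(UNITS)):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[d])
                           + c2 * rng.random() * (gbest[d] - x[d]))
                x[d] = min(max(x[d] + V[i][d], lo[d]), hi[d])  # respect unit limits
            if cost(x) < cost(pbest[i]):
                pbest[i] = x[:]
                if cost(x) < cost(gbest):
                    gbest = x[:]
    return gbest

best = pso()  # dispatch [P1, P2, P3] meeting demand at near-minimum cost
```

A full ELD implementation would extend the penalty to include transmission losses, e.g. via the B-coefficient loss formula.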
While writing the report on our project seminar, we reflected that science and smart technology are ever-expanding fields, and that the engineers working hard day and night make life a gift for us.
These are the slides from the webinar hosted by NNIN at the University of Michigan on May 28, 2013. Find out more about my views on how atomic-scale modeling can help the development of nanoelectronics based on nanowires, interfaces, and graphene. The special Atomistix ToolKit
In this work, the Predestination of Particles Wavering Search (PPS) algorithm is applied to solve the optimal reactive power problem. The PPS algorithm is modeled on the motion of particles in the search space. Normally, particle movement combines gradient-based and swarming motion. In gradient-based progress, particles are permitted to move at a steady velocity, but when an outcome is poorer than the previous one, the particle's velocity is immediately reversed at half its magnitude; this helps the search reach a local optimum and is expressed as the wavering movement. The proposed PPS algorithm is evaluated on the standard IEEE 14-, 30-, 57-, 118-, and 300-bus systems, and simulation results show that PPS reduces power loss efficiently.
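The wavering movement described above can be sketched in a few lines. The one-dimensional quadratic objective, starting point, and step constants below are illustrative assumptions, not part of the PPS paper.

```python
# Sketch of the wavering movement: advance at a steady velocity and, whenever
# the new outcome is worse than the previous one, reverse the velocity at half
# its magnitude. The objective and constants here are illustrative.
def wavering_search(f, x, v=1.0, iters=60):
    fx = f(x)
    for _ in range(iters):
        x = x + v
        f_new = f(x)
        if f_new >= fx:      # poorer outcome: reverse with half the magnitude
            v = -0.5 * v
        fx = f_new
    return x

x_opt = wavering_search(lambda t: (t - 3.0) ** 2, 0.0)  # minimum is at t = 3
```

Each reversal halves the step, so the particle oscillates around the optimum with geometrically shrinking overshoot.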
What is quantum computing?
What are quantum bits (qubits)?
What are reversible logic gates and logic circuits?
What is a quantum neuron (quron)?
What are the methods of implementing an ANN using quantum computing?
This talk was presented at the 22nd International conference on Surface Modification Technology, 22-24 September 2008, in Trollhattan, Sweden. It describes some recent computational research work carried out using molecular dynamics methods to calculate physical properties, including viscosity, of liquid nickel over a wide temperature range.
The Extraordinary World of Quantum Computing - Tim Ellison
Originally presented at QCon London, 6 March 2018.
The classical computer on your lap or housed in your data centre manipulates data represented with a binary encoding; quantum computers are different. They use atomic-level quantum mechanics to represent multiple data states simultaneously, leading to a phenomenal exponential increase in the representable state of data, and to new solutions to problems that are infeasible on today's classical computers. This session assumes no prior knowledge of quantum technology and presents an introduction to the field of quantum computing, including an introduction to the quantum bit, the types of problem suited to quantum computing, a demo of running algorithms on IBM's quantum machines, and a peek into the future of quantum computers.
Introduction to Quantum Computing & Quantum Information Theory - Rahul Mee
Note: this presentation was created for study purposes.
This comprehensive introduction to the field offers a thorough exposition of quantum computing and the underlying concepts of quantum physics.
This presentation is an introduction to Density Functional Theory, an essential computational approach used by physicists and quantum chemists to study solid-state matter.
Calculating transition amplitudes by variational quantum eigensolvers - QunaSys
This is our poster planned for presentation at the APS March Meeting.
We propose a method to calculate transition amplitudes between two orthogonal states on NISQ devices.
This work is joint research between QunaSys and Mitsubishi Chemical Corporation.
The MSc defense ceremony was held on 6-7-2017 at Mansoura University, Faculty of Engineering. This presentation is shared to help MSc students in the Faculty of Engineering prepare their thesis presentations and ease their tension before presenting.
In proton-proton collisions, cones of particles called jets are produced. Jets are very important in particle physics because they carry information about quarks and gluons. However, the jet energy read out from detectors like CMS does not correspond to the true value; corrections must be applied to obtain the true particle-level energy. In these slides, I talk a little about the jet energy corrections used in the CMS experiment.
APPLICATION OF PARTICLE SWARM OPTIMIZATION TO MICROWAVE TAPERED MICROSTRIP LINES - cseij
Application of metaheuristic algorithms has been of continued interest in the field of electrical engineering because of their powerful features. In this work, a special design is made for a tapered transmission line used for matching an arbitrary real load to a 50 Ω line. The problem at hand is to match this arbitrary load to the 50 Ω line using a three-section tapered transmission line with impedances in decreasing order from the load. The problem thus becomes optimizing an equation with three unknowns under various conditions. The optimized values are obtained using particle swarm optimization. It can easily be shown that PSO is very strong in solving this kind of multiobjective optimization problem.
Extraction of photovoltaic generator parameters through combination of an an... - IJECEIAES
In the present work, we propose an improved method based on a combination of an analytical and an iterative approach to extract the photovoltaic (PV) module parameters from the measured current-voltage characteristics and the simple diode model. First, we calculate the series resistance using a set of analytical formulas at the base values of the three current-voltage curves. Then, three other parameters are expressed analytically as functions of the series resistance and the ideality factor, based on the linear least-squares method. Finally, the ideality factor is calculated by applying an iterative algorithm that minimizes the normalized root mean square error (NRMSE). The proposed method was validated on real experimental data from two PV generators, showing the best fit to the I-V curves. Moreover, the proposed method needs only an initial value for the ideality factor.
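A minimal sketch of the final step described above: scanning the ideality factor and keeping the value that minimizes the NRMSE between modeled and measured points. The single-diode model evaluation, the fixed-point solver, and all numbers here are illustrative assumptions, not the paper's method or data.

```python
import math

VT = 0.0257  # thermal voltage at ~25 degrees C (V)

def diode_current(v, iph, i0, rs, rsh, n, iters=50):
    """Solve i = iph - i0*(exp((v + i*rs)/(n*VT)) - 1) - (v + i*rs)/rsh
    by fixed-point iteration (adequate for this sketch)."""
    i = iph
    for _ in range(iters):
        i = iph - i0 * (math.exp((v + i * rs) / (n * VT)) - 1) - (v + i * rs) / rsh
    return i

def nrmse(model, data):
    mse = sum((m - d) ** 2 for m, d in zip(model, data)) / len(data)
    return math.sqrt(mse) / max(abs(d) for d in data)

# Synthetic "measured" curve generated with a known ideality factor.
iph, i0, rs, rsh, n_true = 5.0, 1e-9, 0.02, 100.0, 1.3
volts = [0.05 * k for k in range(11)]  # 0 .. 0.5 V
measured = [diode_current(v, iph, i0, rs, rsh, n_true) for v in volts]

# Iteratively scan the ideality factor and keep the smallest-NRMSE value.
candidates = [1.0 + 0.01 * k for k in range(101)]  # 1.00 .. 2.00
n_best = min(candidates, key=lambda n: nrmse(
    [diode_current(v, iph, i0, rs, rsh, n) for v in volts], measured))
```

On this synthetic curve the scan recovers the ideality factor used to generate the data; on real measurements the other parameters would be re-derived analytically at each candidate, as the abstract describes.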
These are the draft slides we used for our DAC 2014 presentation.
Abstract: We propose MATEX, a distributed framework for transient simulation of power distribution networks (PDNs). MATEX uses a matrix exponential kernel with Krylov subspace approximations to solve the differential equations of linear circuits. First, the whole simulation task is divided into subtasks based on decompositions of the current sources, in order to reduce computational overhead. These subtasks are then distributed to different computing nodes and processed in parallel. Within each node, after the matrix factorization at the beginning of the simulation, the adaptive time-stepping solver runs without extra matrix re-factorizations. MATEX overcomes the stiffness limitation of previous matrix exponential-based circuit simulators through the rational Krylov subspace method, which allows larger step sizes with smaller Krylov subspace bases and greatly accelerates the whole computation. MATEX outperforms both traditional fixed and adaptive time-stepping methods, e.g., achieving around a 13X speedup over a trapezoidal framework with fixed time steps on the IBM power grid benchmarks.
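A toy sketch of the matrix-exponential time-stepping idea behind MATEX (not the MATEX code itself): for a linear system x' = Ax, a step of size h is exactly x(t+h) = expm(hA)·x(t), so stiffness does not force tiny steps. MATEX uses rational Krylov subspace approximations; the dense scaling-and-squaring expm below is only for illustration on a small stiff example.

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A):
    """Dense matrix exponential via scaling-and-squaring with a Taylor series."""
    n = len(A)
    norm = max(sum(abs(v) for v in row) for row in A)
    s = max(0, math.ceil(math.log2(norm)) + 1) if norm > 0 else 0
    B = [[A[i][j] / 2 ** s for j in range(n)] for i in range(n)]
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    T = [row[:] for row in E]
    for k in range(1, 20):              # Taylor series of exp(B), B is small
        T = matmul(T, B)
        T = [[t / k for t in row] for row in T]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):                  # undo the scaling by repeated squaring
        E = matmul(E, E)
    return E

# Stiff example: time constants 1 and 1/1000. With h = 0.1 the exponential
# step is still exact, where an explicit method would need h ~ 1e-3.
h = 0.1
A = [[-1.0, 0.0], [0.0, -1000.0]]
P = expm([[h * a for a in row] for row in A])
x0 = [1.0, 1.0]
x1 = [sum(P[i][j] * x0[j] for j in range(2)) for i in range(2)]
```

The slow mode decays by exactly exp(-0.1) in one step while the fast mode is already negligible, which is the stiffness advantage the abstract refers to.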
Linear regression [Theory and Application (In physics point of view) using py... - ANIRBANMAJUMDAR18
Machine-learning models are behind many recent technological advances, including high-accuracy text translation and self-driving cars. They are also increasingly used by researchers to help solve physics problems, such as finding new phases of matter, detecting interesting outliers in data from high-energy physics experiments, and finding astronomical objects known as gravitational lenses in maps of the night sky. The rudimentary algorithm that every machine learning enthusiast starts with is linear regression. In statistics, linear regression is a linear approach to modelling the relationship between a scalar response (dependent variable) and one or more explanatory (independent) variables. Linear regression analysis (least squares) is used in the physics lab to prepare computer-aided reports and to fit data. In this article, it is applied to the experiment 'Determination of the dielectric constant of non-conducting liquids'. The entire computation is carried out in the Python 3.6 programming language.
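The least-squares fit described above reduces to the familiar normal equations for a straight line; the data points below are made up for illustration, not the dielectric-constant measurements.

```python
# Ordinary least squares for y = m*x + b, the kind of fit used in such lab
# reports. The (x, y) points are made-up illustrative data.
def linear_fit(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - m * sx) / n                          # intercept
    return m, b

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x with measurement noise
m, b = linear_fit(xs, ys)
```

The same closed-form expressions underlie `numpy.polyfit(xs, ys, 1)`, which a lab report would typically call instead.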
Big Fast Data in High-Energy Particle Physics - Andrew Lowe
Experiments at CERN (the European Organization for Nuclear Research) generate colossal amounts of data. Physicists must sift through about 30 petabytes of data produced annually in their search for new particles and interesting physics. The tidal wave of data produced by the Large Hadron Collider (LHC) at CERN poses an unprecedented challenge for the experiments' data acquisition systems, and it is the need to select rare physics processes with high efficiency while rejecting high-rate background processes that drives the architectural decisions and technology choices. Although filtering and managing large data sets is of course not exclusive to particle physics, the approach that has been taken is somewhat unique. In this talk, I describe the typical journey taken by data from the readout electronics of one experiment to the results of a physics analysis.
ESTIMATION OF THE PARAMETERS OF SOLAR CELLS FROM CURRENT-VOLTAGE CHARACTERIST... - ijscai
This paper presents a method for calculating the light-generated current, the series resistance, the shunt resistance, and the two components of the reverse saturation current usually encountered in the double-diode representation of the solar cell, from experimental values of the cell's current-voltage characteristics, using a genetic algorithm. The theory is able to regenerate the above-mentioned parameters to very good accuracy when applied to cell data generated from pre-defined parameters. The method is applied to various types of space-quality solar cells and sub-cells. All parameters except the light-generated current are found to be nearly the same for a cell whose characteristics were analyzed both under illumination and in the dark. The light-generated current is nearly equal to the short-circuit current in all cases. The parameters obtained by this method and by another method are nearly equal wherever applicable. The parameters are also shown to represent the current-voltage characteristics well.
Presentation at the 2005 Spanish Physics Biennial: "Combined Test Beam at very low pT" - CARMEN IGLESIAS
In 2004, the ATLAS collaboration carried out a combined test with particle beams, called the Combined Test Beam (CTB). A complete section of the detector barrel, with the EM and HAD calorimeters and the end-caps of the muon detector, was tested. A slice of the ATLAS experiment (fig. 1) was exposed to beams of various particles (electrons, pions, muons, protons and photons) at various energies and polarities, from 1 to 350 GeV, providing a unique opportunity to evaluate the individual performance of the sub-detectors, but also to exploit the power of ATLAS for particle identification and measurement. This analysis used the CTB data at very low energy (1-9 GeV) at eta = 0.35, with information from both calorimeters (EM+HAD) and track information from the TRT (the pixel system was not operating). The samples of 100,000 events contain a mixture of electrons, pions and muons, and were reconstructed with version 9.1.2 of Athena, the ATLAS offline software.
Electromagnetic Levitation (control project) - Salim Al Oufi
This project is about controlling the position of a magnetic ball using a PID controller. The ball's position is controlled by changing the magnetic field from a solenoid, and sensors feed back the position of the ball. This is the simplest description of this paper.
Richard's entangled aventures in wonderland - Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Cancer cell metabolism: special reference to the lactate pathway - AADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy they need to function.
Energy is stored in the bonds of glucose, and when glucose is broken down, much of that energy is released.
Cells utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two molecules of a smaller chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to "burn" the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (Ox-Phos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis - Kreb's - oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELLS:
Unlike healthy cells, which "burn" the entire sugar molecule to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis, and frequently do not complete the second step, oxidative phosphorylation.
This yields only 2 molecules of ATP per glucose molecule instead of the 36 or so ATP that healthy cells gain. As a result, cancer cells need to use many more sugar molecules to get enough energy to survive.
Introduction to the Warburg phenomenon:
WARBURG EFFECT: Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose from their surroundings than normal cells do.
Otto Heinrich Warburg (8 October 1883 - 1 August 1970) was awarded the 1931 Nobel Prize in Physiology or Medicine for his "discovery of the nature and mode of action of the respiratory enzyme."
WARBURG EFFECT: the tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg observed that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN - Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Multi-source connectivity as the driver of solar wind variability in the heli... - Sérgio Sacani
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of ESA/NASA's Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic and then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
5. The Standard Model
Sep 6 2011
Describes the interactions of
matter via 3 of the 4
fundamental forces.
Matter (and anti-matter):
Three generations
Leptons
Quarks (observed as jets)
Forces:
Electromagnetic (𝛾)
Weak nuclear (𝑊, 𝑍)
Strong nuclear (𝑔)
Does not include gravity.
Image source: LiveScience
7. Jets
The strong force acts on particles with 'color' charge.
The strong force carrier, the gluon, possesses color charge.
Therefore, the strong force does not decrease with distance.
As quarks and gluons propagate further apart, it becomes energetically
favorable to create color neutral hadrons by pulling quark-antiquark pairs
from the vacuum.
Upshot: Quarks and gluons are never observed directly in particle
detectors. They shower into a 'jet' of hadrons. This is difficult to simulate.
[Figure: a quark or gluon showering into a jet of color-neutral hadrons. Image source: Homer Wolfe dissertation]
8. Units & energy
The electron-volt (eV) is our unit of energy. 1 eV is the energy
required to move a single electron 'up' a one-volt potential 'hill.'
Einstein's 𝐸 = 𝛾𝑚𝑐² can be written as 𝐸² = 𝑚²𝑐⁴ + 𝑝²𝑐², where
𝐸 = energy, 𝑚 = rest mass, and 𝑝 = momentum.
We use 𝑐 = 1, so 𝑬² = 𝒎² + 𝒑².
Energy, mass, and momentum are therefore all in units of eV.
Protons at the LHC are collided with a total energy (center of mass
energy) of 7 TeV, or 7×10¹² eV.
Most particles have mass much less than 7 TeV.
The top quark has a mass of 172 GeV, or 1.72×10¹¹ eV, around the
mass of a tungsten atom.
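In natural units the relation above becomes simple arithmetic. A minimal sketch; the 100 GeV momentum value is only an illustration:

```python
import math

# With c = 1, E^2 = m^2 + p^2: energy, mass, and momentum share eV-based units.
def energy(m_gev, p_gev):
    """Total energy (GeV) of a particle with rest mass m and momentum p."""
    return math.sqrt(m_gev**2 + p_gev**2)

# A top quark (m = 172 GeV) carrying a hypothetical 100 GeV of momentum:
print(energy(172.0, 100.0))  # ~199 GeV
```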
9. Top quark production & decay
[Feynman diagrams: gluon-gluon fusion and quark-antiquark annihilation producing 𝑡𝑡 pairs]
Top pair production:
70% gluon induced in
7 TeV pp collisions
Top quark decay lifetime:
∝
𝟏
𝒎 𝒕
𝟑
, 𝓞 𝟏𝟎−𝟐𝟓
𝐬𝐞𝐜
Tops decay before they hadronize:
𝒕 → 𝑾𝒃, 𝑾 → 𝒋𝒋 𝒐𝒓 𝒍𝝊
[Pie chart of 𝑡𝑡 decay channels: all jets 44%, τ+x 21%, µ+jets 15%, e+jets 15%, dilepton (not τ) 5%]
10. Production cross section
Related to the probability that an event will occur.
Units of area, barn: 1 b = 10⁻²⁴ cm²
Hydrogen atom has cross section of 𝒪(10⁻²⁰) cm²
Hydrogen nucleus has cross section of 𝒪(10⁻²⁶) cm²
𝑡𝑡 production cross section in 7 TeV pp collisions should be 157 pb
according to a Standard Model calculation.
Luminosity is related to the “brightness” of the particle source (the
LHC), measured in units of inverse area per second: cm⁻²s⁻¹.
This analysis uses 36 pb⁻¹ integrated luminosity, ∫ℒ𝑑𝑡.
𝑁 = 𝐴𝜖𝜎∫ℒ𝑑𝑡
𝐴𝜖 = acceptance × efficiency
𝜎 = cross section
∫ℒ𝑑𝑡 = integrated luminosity (ℒ = instantaneous luminosity)
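The yield formula can be evaluated directly. A minimal sketch using the numbers on this slide; the acceptance × efficiency value is a made-up illustration, not the analysis value:

```python
# Expected event count from N = A*eps * sigma * integrated luminosity.
sigma_pb = 157.0        # SM tt-bar cross section at 7 TeV, in pb
int_lumi_pb_inv = 36.0  # integrated luminosity of this analysis, in pb^-1
acc_eff = 0.07          # hypothetical A*eps for illustration only

n_expected = acc_eff * sigma_pb * int_lumi_pb_inv
print(n_expected)  # roughly 396 expected events
```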
18. Services
Computing sites provide a Compute Element (CE), Storage Element
(SE), or both. OSG CEs provide Globus services and SEs provide SRM.
Users authenticate via certificate/proxy private key infrastructure.
Sites publish availability and specs to grid database, BDII.
CMS tracks dataset metadata and location in DBS/DLS.
19. Software
User data analysis jobs are sent to the site hosting the dataset via CRAB.
CRAB supports several schedulers, including gLite-UI and Condor-glideIn.
Physics requests for simulations are regularly compiled and selected for
production. Several ProdAgent instances manage all production at a group of
sites.
Datasets are transferred between sites with PhEDEx. PhEDEx is based on an
agent-blackboard architecture - independent software agents schedule and
perform transfers that are tracked in the PhEDEx database.
20. Tiers
[Diagram of the tiered computing model:
Tier-0: event reconstruction; stores complete dataset.
Tier-1: reprocessing; stores complete dataset; official simulation.
Tier-2: official simulations; user analysis & simulation.
Tier-3: user-driven storage, analysis & simulation.
Communication between Tier-0 and the Tier-1s is predictable (data distribution); communication below, driven by load balancing and user requests for data & simulations, is unpredictable.]
24. Muon reconstruction
Particle track reconstruction:
hypothesis = helix
radius, 𝑓(𝑝𝑇)
displacement from the origin, 𝑑0
'coil separation', 𝑓(𝑝𝑧)
data = tracking 'hits'
location of particle in layers of tracker
uncertainty is partially a function of sensor size
seed (first hypothesis)
inner tracker: three hits or two hits + beamspot
muon chambers: hits that form track segments in a large chamber
Muon:
global
start with track in muon chambers
search for matching inner track
efficient at high 𝑝𝑇
tracker
start with inner track
search for matching hits in muon chambers
efficient at low 𝑝𝑇
Kalman filter: iteratively update a hypothesis
using data with measured uncertainties
25. Jet reconstruction
Particle flow:
Creates the partons to be used in
the jet cone algorithm
inner tracks:
repeatedly reconstruct inner tracks
remove associated hits each time
progressively loosen quality criteria
calorimeter clusters:
seed = calorimeter cells with
energy above some threshold
add cells with energy above
another threshold to cluster if cell is
geometrically adjacent to cluster
adjust cluster energy and position
by fractionally sharing cell energy
across clusters
Jet from anti-𝑘𝑇 algorithm:
distance metric: 𝑑𝑖𝑗 = min(𝑝𝑇,𝑖⁻², 𝑝𝑇,𝑗⁻²) Δ𝑅𝑖𝑗²/𝑅²
We use 𝑅 = 0.5.
Make a jet when a particle's 𝑝𝑇⁻² is smaller than any 𝑑𝑖𝑗.
infrared and collinear safe
Jet cone: iteratively add partons to jets with smallest distance metric,
create jet when distance too large
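The clustering loop described above can be sketched in a few lines. This toy version (not the CMS implementation) uses (pT, η, φ) triples and merges by summing pT and pT-averaging the angles, a crude stand-in for four-vector addition:

```python
import math

R = 0.5  # cone-size parameter used in the analysis

def delta_r2(a, b):
    """Squared angular separation (eta, phi) between two (pT, eta, phi) triples."""
    dphi = abs(a[2] - b[2])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return (a[1] - b[1]) ** 2 + dphi ** 2

def antikt(particles):
    """Sequentially cluster particles; promote to a jet when the beam
    distance pT^-2 is smaller than every pairwise d_ij."""
    parts = list(particles)
    jets = []
    while parts:
        hard = min(range(len(parts)), key=lambda i: parts[i][0] ** -2)
        d_beam = parts[hard][0] ** -2
        d_pair, pair = float("inf"), None
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                d = (min(parts[i][0] ** -2, parts[j][0] ** -2)
                     * delta_r2(parts[i], parts[j]) / R ** 2)
                if d < d_pair:
                    d_pair, pair = d, (i, j)
        if pair is None or d_beam < d_pair:
            jets.append(parts.pop(hard))  # promote hardest particle to a jet
        else:
            i, j = pair  # merge the closest pair
            a, b = parts[i], parts[j]
            pt = a[0] + b[0]
            merged = (pt, (a[0] * a[1] + b[0] * b[1]) / pt,
                      (a[0] * a[2] + b[0] * b[2]) / pt)
            parts = [p for k, p in enumerate(parts) if k not in (i, j)]
            parts.append(merged)
    return jets

# A hard particle absorbs a nearby soft one; a distant particle stays separate.
jets = antikt([(100.0, 0.0, 0.0), (5.0, 0.1, 0.1), (20.0, 2.0, 2.0)])
print(len(jets))  # 2
```

The pT⁻² weighting makes soft particles cluster around hard ones first, which is what makes the algorithm infrared and collinear safe.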
26. 𝑏 tags
The 𝑏 quark hadronizes into a 𝐵 meson, which has a lifetime of
𝒪(10⁻¹²) seconds.
The decay of the 𝐵 meson occurs within the beampipe, but a
resolvable distance from the interaction point.
Jets from 𝑏 quarks can be 'tagged' by the presence of displaced tracks.
We require the impact parameter significance of the 2nd track to be
larger than 3.3.
Efficiency of 55% to 74% and light fake rate of 1% to 6% (varies with
jet 𝑝𝑇 and 𝜂).
27. Event selection
[Pie charts of selected-event composition:
≥3 jets: 𝑉+jets 57%, single top 3%, QCD 4%, remainder 𝑡𝑡.
≥4 jets: 𝑉+jets 37%, single top 2%, QCD 2%, remainder 𝑡𝑡.]
Can't observe 𝑡𝑡 directly.
We choose to search for 𝑡𝑡 → 𝜇+jets: 𝑡𝑡 → 𝑊⁺𝑏 𝑊⁻𝑏 → 𝜇𝜈𝜇 𝑗𝑗 𝑏𝑏
Require exactly one isolated 𝜇:
𝑝𝑇 > 20 GeV
|𝜂| < 2.1
Δ𝑅(𝜇, 𝑗) > 0.3
𝑅𝑒𝑙𝐼𝑠𝑜 < 0.05, where 𝑅𝑒𝑙𝐼𝑠𝑜 = (𝐸𝑇 near 𝜇)/(𝜇 𝑝𝑇)
Veto on an electron
Expect ≥4 jets, require ≥3:
𝑝𝑇 > 30 GeV
|𝜂| < 2.4
We need to discriminate between 𝑡𝑡
and 𝑉+jets (𝑉 = 𝑊/𝑍).
29. Neural network
Given measurables as inputs (e.g., muon 𝜂 or jet 𝑝𝑇).
Combines the inputs using nested sums of functions:
𝑦 = 𝑓(𝑎 + 𝑏1𝑓(𝑐 + 𝑑1𝑓(…) + 𝑑2𝑓(…) + ⋯) + 𝑏2𝑓(…) + ⋯)
Outputs the discriminant, 𝑦, which takes values near 0 for
background and near 1 for signal.
Learning algorithm finds the parameters that yield the
desired 𝑦 values.
We use the sigmoid function for 𝑓: 𝑓(𝑥) = 1/(1 + 𝑒⁻ˣ)
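The nested-sum form above is easy to evaluate directly. A minimal two-hidden-neuron sketch with the sigmoid 𝑓; the weights are arbitrary illustrative numbers, not the trained ones from the analysis:

```python
import math

def f(x):
    """Sigmoid activation: maps any real value to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def network(inputs, hidden_weights, hidden_biases, out_weights, out_bias):
    """y = f(a + sum_j b_j * f(c_j + sum_k d_jk * x_k)) -- one hidden layer."""
    hidden = [f(b + sum(w * x for w, x in zip(ws, inputs)))
              for ws, b in zip(hidden_weights, hidden_biases)]
    return f(out_bias + sum(w * h for w, h in zip(out_weights, hidden)))

# Two inputs standing in for NN inputs such as a b-tag flag and muon |eta|.
y = network([0.5, -1.2],
            hidden_weights=[[1.0, -0.5], [0.3, 0.8]],
            hidden_biases=[0.1, -0.2],
            out_weights=[2.0, -1.0], out_bias=0.0)
print(y)  # a discriminant value between 0 and 1
```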
30. Neuron
Takes as input either the physical measurables or the
output from other neurons.
Calculates 𝑣, a shifted sum of weighted inputs.
Outputs 𝑓(𝑣).
[Diagram: neuron 𝑗 in layer 𝑟 sums inputs 𝑦𝑘 from layer 𝑟−1, weighted by 𝑤𝑗𝑘 and shifted by bias 𝑤𝑗0, to form 𝑣𝑗; it outputs 𝑦𝑗 = 𝑓(𝑣𝑗)]
32. Inputs
Presence of a 𝑏-tagged jet
Angular separation of two
leading jets, Δ𝑅12
Position |η| of the muon
33. Output
[Four NN-output distributions (fraction of events vs. neural network output): 𝑡𝑡 simulations, 𝑉+jets simulations, single top simulations, QCD simulations]
Two-peak structure due to the 𝑏-tag boolean.
We form fit
templates for signal
and background.
34. Correcting or replacing simulations using data
Some simulations aren't as good as others. How do we correct or
replace them using data?
35. 𝑏 tag efficiency
The 𝑏 tag boolean is an important input to the NN.
The shape is dependent on the efficiency and fake rate of
tagging jets.
Jets in selected events have 𝑏 tag efficiency of 55% to 74% and
light fake rate of 1% to 6% according to simulations.
Simulations aren't perfect.
The tag efficiency and fake rate are measured from
(nearly) independent data samples.
𝑆𝐹𝑏 = 𝑒𝑏,𝑑𝑎𝑡𝑎/𝑒𝑏,𝑀𝐶 = 0.9 and 𝑆𝐹𝑙 = 𝑒𝑙,𝑑𝑎𝑡𝑎/𝑒𝑙,𝑀𝐶 = 1.06–1.32.
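Applying these scale factors to correct simulated rates amounts to 𝑒𝑑𝑎𝑡𝑎 = 𝑆𝐹 · 𝑒𝑀𝐶. A minimal sketch; the MC efficiency and fake-rate values are illustrative, and 1.2 is simply a value inside the quoted 1.06–1.32 range:

```python
# Correct simulated tagging rates with measured data/MC scale factors.
sf_b = 0.9     # SF_b from the slide
sf_l = 1.2     # a value within the quoted SF_l range of 1.06-1.32
e_b_mc = 0.65  # illustrative simulated b-tag efficiency
e_l_mc = 0.03  # illustrative simulated light-jet fake rate

e_b_data = sf_b * e_b_mc  # b-tag efficiency expected in data
e_l_data = sf_l * e_l_mc  # light fake rate expected in data
print(round(e_b_data, 3), round(e_l_data, 3))  # 0.585 0.036
```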
36. 𝑡𝑡 and single top data-corrected templates
Apply 𝑆𝐹𝑏 = 0.9 and 𝑆𝐹𝑙 = 1.06–1.32.
Line = nominal simulation. Fill = corrected.
[Corrected NN-output templates (fraction of events vs. neural network output) for single top and 𝑡𝑡]
37. QCD (jet only events)
Simulation is difficult due to parton showering.
Events with muon 𝑅𝑒𝑙𝐼𝑠𝑜 > 0.1 are dominated by QCD (97%).
38. QCD data driven inputs
QCD events passing nominal selection (𝑅𝑒𝑙𝐼𝑠𝑜 < 0.05) have similar
NN input distributions as events with reversed muon isolation
(𝑅𝑒𝑙𝐼𝑠𝑜 > 0.1).
39. 𝑉+jets
Heavy flavor content in 𝑉+jets subject to same uncertainties as in QCD.
Events with a muon and exactly two jets are dominated by 𝑉+jets (87%).
40. 𝑉+jets data driven inputs
𝑉+jets events passing nominal selection (≥3 jets) have similar NN
input distributions as events with exactly 2 jets.
41. Final fit templates
Lines = original nominal simulations.
Histograms = final data-corrected/replaced fit templates.
The templates will be fit to the discriminant calculated from data to
determine the 𝑡𝑡 yield.
[Final fit templates (fraction of events vs. neural network output): corrected 𝑡𝑡 simulations, 2-jet data (𝑉+jets), corrected single top simulations, 𝑅𝑒𝑙𝐼𝑠𝑜 > 0.1 data (QCD)]
43. Maximum likelihood fit
We assume the observed 𝑁𝑑𝑎𝑡𝑎 data events are composed of 𝑁𝑡𝑡
from 𝑡𝑡, 𝑁𝑡 from single top, 𝑁𝑉 from 𝑉+jets, and 𝑁𝑄𝐶𝐷 from QCD,
where each 𝑁 is unknown: 𝑁𝑑𝑎𝑡𝑎 = 𝑁𝑡𝑡 + 𝑁𝑡 + 𝑁𝑉 + 𝑁𝑄𝐶𝐷.
Given each probability density function, 𝑃(𝑥), over measurable 𝑥,
this assumption yields: 𝑁𝑑𝑎𝑡𝑎𝑃(𝑥) = 𝑁𝑡𝑡𝑃𝑡𝑡(𝑥) + 𝑁𝑡𝑃𝑡(𝑥) +
𝑁𝑉𝑃𝑉(𝑥) + 𝑁𝑄𝐶𝐷𝑃𝑄𝐶𝐷(𝑥).
We use the output from the NN as our 𝑥.
Likelihood function: 𝐿(𝜃|𝑥) = ∏ 𝑃(𝑥𝑖|𝜃), product over the 𝑖 = 1 … 𝑁𝑑𝑎𝑡𝑎 events.
We determine 𝜃 = (𝑁𝑡𝑡, 𝑁𝑡, 𝑁𝑉, 𝑁𝑄𝐶𝐷) by maximizing 𝐿.
Using 𝑁 = 𝐴𝜖𝜎∫ℒ𝑑𝑡, we convert 𝑁𝑡𝑡 into the 𝑡𝑡 cross section, 𝜎𝑡𝑡.
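The fit can be illustrated with a toy unbinned maximum-likelihood sketch. Two hypothetical densities stand in for the four NN-output templates here: a "signal" shape rising toward 1 and a "background" shape falling toward 1, with the signal fraction found by scanning the likelihood (the analysis itself fits four yields):

```python
import math
import random

random.seed(1)

def p_sig(x):  # toy signal density on [0, 1], peaking near 1
    return 2.0 * x

def p_bkg(x):  # toy background density on [0, 1], peaking near 0
    return 2.0 * (1.0 - x)

# Pseudo-data: 300 signal-like + 700 background-like events (true fraction 0.3),
# drawn via the inverse-CDF method for each density.
data = ([math.sqrt(random.random()) for _ in range(300)]
        + [1.0 - math.sqrt(random.random()) for _ in range(700)])

def neg_log_l(f_sig):
    """Negative log-likelihood of the mixture with signal fraction f_sig."""
    return -sum(math.log(f_sig * p_sig(x) + (1.0 - f_sig) * p_bkg(x))
                for x in data)

# Scan the signal fraction; keep the value that maximizes the likelihood.
best = min((f / 1000.0 for f in range(1, 1000)), key=neg_log_l)
print(best)  # close to the true fraction 0.3
```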
44. Uncertainty
𝑁 = 𝐴𝜖𝜎∫ℒ𝑑𝑡 is a statement of the average number of
events we expect to observe.
Any given experiment is not expected to measure exactly
𝑁 events, due to:
Quantum Mechanics (particle interactions are statistical!)
Experimental measurement uncertainties
Underlying assumptions that could be wrong
We use pseudo-experiments to calculate how much our
measurement of 𝜎 changes in various scenarios. This is our
uncertainty.
45. Pseudo-experiments
Randomly sample the NN templates from simulations.
The number of events drawn from each template varies in each pseudo-experiment,
Poisson-fluctuating about the expected number of events for each.
Fit the NN templates to the randomly sampled pseudo-data.
[Templates sampled: 𝑡𝑡 simulations, 𝑉+jets simulations, single top simulations, QCD simulations]
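One pseudo-experiment amounts to Poisson-fluctuating each template's event count about its expectation. A minimal sketch; the 𝑡𝑡 expectation of 387 is from the next slide, the other yields are illustrative, and a normal approximation stands in for the Poisson draw:

```python
import math
import random

random.seed(2)

def poisson_like(mean):
    """Normal approximation to a Poisson draw, adequate for these means."""
    return max(0, round(random.gauss(mean, math.sqrt(mean))))

# Expected yields per template; tt = 387 as quoted, the others illustrative.
expected = {"ttbar": 387, "single top": 25, "V+jets": 450, "QCD": 40}

pseudo = {name: poisson_like(mu) for name, mu in expected.items()}
total = sum(pseudo.values())
print(pseudo, total)  # one pseudo-dataset; repeat 10,000 times and refit each
```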
46. Pseudo-results
We perform 10,000 pseudo-experiments.
They indicate the presence of a −3% intrinsic bias (we measure
375 𝑡𝑡 events on average, but expect 387).
This is due to using data to form the fit templates for QCD and 𝑉+jets.
The final measurement is corrected for this bias.
Statistical uncertainty of 10%.
47. Systematic uncertainty
We relate photon counts in calorimeters to particle energy.
What if this conversion factor is high or low? Measured jet energies
would be systematically higher or lower than the true energy of the particle.
Change the simulations to experience a systematic increase or
decrease in jet energy.
The change in measured cross section in a systematic scenario is the
“systematic uncertainty”.
48. Summary of systematics
Source                  Uncertainty (%)
Jet energy scale        +9.7/−5.1
Jet energy resolution   ±3.3
𝑏 tag efficiency        +16.1/−14.7
𝑉+𝑏 k factor            +5.2/−5.6
𝑉+𝑐 k factor            +4.4/−1.8
ISR/FSR                 ±5.0
𝑄²                      +6.8/−3.5
ME to PS matching       +6.0/−3.0
PDF                     +0.6/−1.8
Combined                +22.8/−18.4
Largest from 𝑏-tag efficiency uncertainty (𝑆𝐹𝑏 = 0.900 ± 0.135).
This uncertainty
already reduced by
half for 2011 data.
49. Summary
The cross section for 𝑝𝑝 → 𝑡𝑡 production at a center of
mass energy of 7 TeV is measured using a data sample
with integrated luminosity 36.1 pb⁻¹ collected by the
CMS detector at the LHC. The analysis is performed on
a computing grid. Events with an isolated muon and
three hadronic jets are analyzed using a multivariate
machine learning algorithm. Kinematic variables and b
tags are provided as input to the algorithm; output from
the algorithm is used in a maximum likelihood fit to
determine 𝑡𝑡 event yield. The measured cross section is
151 ± 15 (stat.) +35/−28 (syst.) ± 6 (lumi.) pb.
This is in agreement with the theory predicted cross
section of 157 pb.
50. Outlook
Submitted to Physical Review D for publication.
Available statistics going up, though measurement is systematics limited.
Systematic uncertainties are going down, especially with respect to 𝑏 tags.
51. I received a lot of help from some really wonderful
people. You know who you are.
Sean, words can't express, so I shall just lamely say: thank you.
Dedication
59. Simulation
MadGraph: matrix element
Particle interactions are fundamentally statistical.
Matrix element is related to probability that particles with given
kinematics will be produced in collision.
Physicist specifies desired initial and final particles, including +jets.
Matrix element includes all possible intermediate 'paths'.
Image source: Scholarpedia
Pythia:
Colored particles like gluons and
quarks never observed in isolation
Particle shower pulls new partons
from vacuum
Beam remnant (leftover from collided
protons)
Multiple interactions from data
60. Monte Carlo (MC) simulations
Integrate a function from
𝑥1 to 𝑥2.
The analytical form is unknown,
but the value can be calculated.
The minimum, 𝑦1, and maximum,
𝑦2, values of the function in the
range [𝑥1, 𝑥2] are known or can
be approximated.
Throw random points (𝑥, 𝑦) in the region (𝑥1, 𝑦1) → (𝑥2, 𝑦2).
Calculate the fraction, 𝐹, with 𝑦 < 𝑓(𝑥).
The integral is then 𝑦1(𝑥2 − 𝑥1) + 𝐹(𝑥2 − 𝑥1)(𝑦2 − 𝑦1).
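The recipe above is a few lines of code. A minimal hit-or-miss sketch (here with 𝑦1 = 0, so the baseline term vanishes), checked against an integral with a known answer:

```python
import math
import random

random.seed(0)

def mc_integrate(f, x1, x2, y1, y2, n=200_000):
    """Hit-or-miss Monte Carlo: fraction of random points under f,
    times the area of the bounding box (plus the y1 baseline)."""
    hits = sum(1 for _ in range(n)
               if random.uniform(y1, y2) < f(random.uniform(x1, x2)))
    return y1 * (x2 - x1) + (hits / n) * (x2 - x1) * (y2 - y1)

# Check against a known answer: the integral of sin(x) on [0, pi] is 2.
result = mc_integrate(math.sin, 0.0, math.pi, 0.0, 1.0)
print(result)  # ~2.0
```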
62. Finding network weights
'Train' on simulated datasets.
Signal events are given a training target of 1, background events a
training target of 0.
The training goal is to minimize a cost function. If the 𝑖 = 1 … 𝑁
training events have targets 𝑦(𝑖) = 0 for background and 𝑦(𝑖) = 1 for
signal, and 𝑦̂(𝑖) is the network output for event 𝑖, then the cost, 𝐶, is:
𝐶 = (1/2) ∑ (𝑦̂(𝑖) − 𝑦(𝑖))², summed over 𝑖 = 1 … 𝑁.
'Propagate' the cost, calculated for the network output, back through
the network by weighting the cost by the weight between the neurons.
Minimize using steepest descent.
Network weights are found that tend to yield a final network output
value near 1 for signal events and near 0 for background events.
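The steepest-descent training described above can be shown on a single sigmoid neuron with the quadratic cost. This is a one-neuron sketch with made-up toy events, not the trained network of the analysis:

```python
import math

def f(v):
    """Sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-v))

# Toy events: (input, target), with target 1 = signal, 0 = background.
events = [(2.0, 1.0), (-2.0, 0.0)]
w, b, rate = 0.0, 0.0, 1.0

for _ in range(500):
    dw = db = 0.0
    for x, y in events:
        yhat = f(w * x + b)
        # dC/dv for this event: (yhat - y) * f'(v), with f' = f * (1 - f)
        delta = (yhat - y) * yhat * (1.0 - yhat)
        dw += delta * x
        db += delta
    w -= rate * dw  # steepest-descent step
    b -= rate * db

cost = 0.5 * sum((f(w * x + b) - y) ** 2 for x, y in events)
print(round(cost, 5))  # near 0: outputs near 1 for signal, 0 for background
```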
63. 𝑆𝐹 𝑏
The 𝐵 meson decay includes a muon in 11% of 𝑏
jets, or in 20% of 𝑏 jets including 𝑏 → 𝑐.
Select events with Δ𝑅 𝜇, 𝑗𝑒𝑡 < 0.4.
Jets with muons inside are from a 𝑏 or from jet fakes (pion decay or
muon chamber punch-through).
To get efficiency of tagging 𝑏 jets in data, the
fraction of tagged jets with muons is adjusted by
the fraction of jets with fake muons.
Done by fitting the 𝑝𝑇ʳᵉˡ distribution.
𝑆𝐹𝑏 uncertainty primarily from the 𝑝𝑇ʳᵉˡ shape and the fraction of
jets with fake muons.
64. 𝑆𝐹𝑙
Jets with tracks that have negative impact
parameters are nearly all from light quarks.
Change the tag algorithm to sort tracks in
opposite order (smallest impact parameter
significance first), label negative tag.
The negative tag distribution is not symmetrical
with respect to the normal tag distribution.
Calculate the ratio between the tag rate of light jets and the negative
tag rate of all jets: 𝑅𝑙 = 𝑒𝑙ᴹᶜ/𝑒₋ᴹᶜ.
Fake rate in data is then 𝑒𝑙ᵈᵃᵗᵃ = 𝑒₋ᵈᵃᵗᵃ 𝑅𝑙.
𝑆𝐹𝑙 uncertainty primarily due to 𝑅𝑙.