This document outlines a presentation on thermodynamics of anti-de Sitter black holes as regularized fidelity susceptibility. It introduces concepts like entanglement entropy, the holographic principle, and holographic entanglement entropy. It then discusses fidelity susceptibility and holographic complexity. The document derives formulas for these quantities for Reissner-Nordström anti-de Sitter black holes. Specifically, it derives an expression for the difference in entanglement entropy between the pure background and metric deformation. It also computes the holographic complexity and fidelity susceptibility volumes for RN black holes through integrals involving the metric function.
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa... (Cemal Ardil)
This document summarizes a research paper that analyzes the transient behavior of the overflow probability in a queuing system with fixed-size batch arrivals. It introduces a set of polynomials that generalize Chebyshev polynomials and can be used to assess the transient behavior. The key findings are:
The polynomials are defined piecewise in the index k, with separate cases for 1 ≤ k ≤ B and k ≥ B + 1, and involve the arrival and service rates λ and μ. In the special case B = 1 of (9), the substitution x → 2x yields the generating function of the Chebyshev polynomials of the second kind.
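For reference, the generating function of the Chebyshev polynomials of the second kind $U_k$ referred to above is the classical identity

\[
\sum_{k=0}^{\infty} U_k(x)\, t^k \;=\; \frac{1}{1 - 2xt + t^2}, \qquad |t| < 1,
\]

which is why the B = 1 case of the generalized polynomials reduces to the $U_k$ after rescaling the argument.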
E. Sefusatti, Tests of the Initial Conditions of the Universe after Planck (SEENET-MTP)
This document outlines Emiliano Sefusatti's presentation on testing the initial conditions of the universe using data from the Planck satellite. The presentation covers predictions from inflation like a flat, homogeneous universe with a nearly scale-invariant power spectrum. It discusses how Planck improved constraints on non-Gaussianity parameters like fNL compared to WMAP. For example, Planck reduced errors on the local fNL parameter by a factor of 2-4 depending on the shape. The implications of Planck's results are explored through the example of constraints on a DBI inflation model.
D. Mladenov - On Integrable Systems in Cosmology (SEENET-MTP)
Lecture by Prof. Dr. Dimitar Mladenov (Theoretical Physics Department, Faculty of Physics, Sofia University, Bulgaria) on December 7, 2011 at the Faculty of Science and Mathematics, Nis, Serbia.
N. Bilic - "Hamiltonian Method in the Braneworld" 3/3 (SEENET-MTP)
This document discusses various models for dark energy and dark matter unification, including quintessence, k-essence, phantom quintessence, Chaplygin gas, and tachyon condensates. It provides field theory descriptions and equations of state for these models. It also discusses issues like the sound speed and structure formation problems that arise for some unified dark energy/dark matter models. Modifications to address these issues, such as generalized Chaplygin gas and variable Chaplygin gas models, are presented.
Full Sky Bispectrum in Redshift Space for 21cm Intensity Maps (RahulKothari51)
The document summarizes a presentation about calculating the full sky bispectrum in redshift space for 21cm intensity maps. It includes:
1) An overview of basic cosmological concepts needed to understand 21cm intensity and the bispectrum.
2) Details on calculating the HI intensity bispectrum, including contributions from linear and non-linear redshift space distortions.
3) Plots showing the bispectrum and the fractional contribution of redshift space distortions, which increase the bispectrum signal by about a factor of 5.
4) Estimates of the signal-to-noise ratio for detecting the bispectrum with future experiments like HIRAX, showing it may be detectable over certain multipole ranges.
1) The document revisits Chapman's 1979 estimates of grid point requirements for large eddy simulation (LES) of turbulent boundary layers.
2) More accurate formulas for high Reynolds number boundary layer flow suggest new grid point requirements: wall-modeled LES scales as ReLx^2/7 and wall-resolving LES scales as ReLx^13/7, compared to Chapman's ReLx^2/5 and ReLx^9/5 estimates.
3) Direct numerical simulation (DNS) grid point requirements are estimated to scale as ReLx^37/14, compared to the typical ReLx^9/4 scaling.
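The quoted scalings can be compared directly. A minimal sketch, using the exponents above with all prefactors set to 1 (hypothetical; the actual estimates carry Reynolds-number-dependent constants):

```python
# Compare the grid-point scaling exponents quoted above for wall-modeled LES,
# wall-resolving LES, and DNS against Chapman's 1979 estimates.
# Prefactors are set to 1 (hypothetical): only the growth rates are meaningful.

from fractions import Fraction

scalings = {
    "wall-modeled LES (revised)":   Fraction(2, 7),
    "wall-modeled LES (Chapman)":   Fraction(2, 5),
    "wall-resolving LES (revised)": Fraction(13, 7),
    "wall-resolving LES (Chapman)": Fraction(9, 5),
    "DNS (revised)":                Fraction(37, 14),
    "DNS (typical)":                Fraction(9, 4),
}

def grid_points(re_lx: float, exponent: Fraction) -> float:
    """Grid points N ~ Re_Lx**exponent, with the prefactor set to 1."""
    return re_lx ** float(exponent)

for name, p in scalings.items():
    # ratio of required points when Re_Lx grows by a factor of 10
    growth = grid_points(1e7, p) / grid_points(1e6, p)
    print(f"{name:32s} N ~ Re^{p}  (x{growth:.1f} per decade of Re)")
```

This makes the summary's point concrete: each extra decade of Reynolds number multiplies the DNS grid count by about 10^(37/14) ≈ 440, versus roughly 2 for wall-modeled LES.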
N. Bilic - "Hamiltonian Method in the Braneworld" 1/3 (SEENET-MTP)
This document provides an overview of the Hamiltonian method in braneworld cosmology. It begins with introductory lectures on preliminaries like Legendre transformations, thermodynamics, fluid mechanics, and basic cosmology. It then covers braneworld universes, strings and branes, and the Randall-Sundrum model. The document concludes with applications of the Hamiltonian method to topics like quintessence, dark energy/matter unification, and tachyon condensates in braneworlds.
N. Bilić: AdS Braneworld with Back-reaction (SEENET-MTP)
- A 3-brane moving in an AdS5 background of the Randall-Sundrum model behaves like a tachyon field with an inverse quartic potential.
- When including the back-reaction of the radion field, the tachyon Lagrangian is modified by its interaction with the radion. As a result, the effective equation of state obtained by averaging over large scales describes a warm dark matter.
- The dynamical brane causes two effects of back-reaction: 1) the geometric tachyon affects the bulk geometry, and 2) the back-reaction qualitatively changes the tachyon by forming a composite substance with the radion and a modified equation of state.
ON OPTIMIZATION OF MANUFACTURING OF MULTICHANNEL HETEROTRANSISTORS TO INCREAS... (ijrap)
In this paper we consider an approach to increasing the integration rate of field-effect heterotransistors. Within this approach we consider a heterostructure with a specific configuration. After manufacturing the heterostructure, the required areas are doped by diffusion or ion implantation, and the doping is finished by an optimized annealing of the dopant and/or radiation defects. Within this paper we also consider the possibility of manufacturing transistors with several channels. Manufacturing multi-channel transistors makes it possible to increase the integration rate of the transistors and to increase the electrical current through the transistor.
EXPECTED NUMBER OF LEVEL CROSSINGS OF A RANDOM TRIGONOMETRIC POLYNOMIAL (Journal For Research)
This document presents research on the expected number of level crossings of random trigonometric polynomials. It begins with an abstract summarizing the background and objectives. The main results are then presented. It is shown that for random trigonometric polynomials of a certain form, with coefficients following a normal distribution, the expected number of real zeros in the interval from 0 to 2π is asymptotically equal to a particular expression involving n, as n becomes large. This expression is derived through applying the Kac-Rice formula and estimating a certain integral. The research builds on previous related works and clarifies results from one of those papers.
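The abstract does not reproduce the polynomial or the resulting expression, but for the classical stationary ensemble T(t) = Σ (a_k cos kt + b_k sin kt) with standard normal coefficients (used here only as a concrete stand-in; the paper's exact form may differ), the Kac-Rice formula gives a closed-form expected zero count on [0, 2π], asymptotically 2n/√3, which a Monte Carlo count of sign changes reproduces:

```python
import numpy as np

def expected_zeros(n: int) -> float:
    """Kac-Rice count of real zeros on [0, 2*pi] for the stationary ensemble
    T(t) = sum_{k=1}^{n} a_k cos(k t) + b_k sin(k t), a_k, b_k ~ N(0, 1):
    E N = 2 * sqrt(lambda_2 / lambda_0), lambda_0 = n, lambda_2 = sum k^2."""
    lam0 = n
    lam2 = n * (n + 1) * (2 * n + 1) / 6
    return 2.0 * np.sqrt(lam2 / lam0)

def sampled_zeros(n: int, trials: int = 200, grid: int = 8192, seed: int = 0) -> float:
    """Monte Carlo estimate: average number of sign changes on a fine grid."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
    k = np.arange(1, n + 1)
    cos_kt = np.cos(np.outer(k, t))   # shape (n, grid)
    sin_kt = np.sin(np.outer(k, t))
    total = 0
    for _ in range(trials):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n)
        vals = a @ cos_kt + b @ sin_kt
        s = np.sign(vals)
        # count sign changes, including the wrap-around from t = 2*pi back to 0
        total += int(np.sum(s != np.roll(s, -1)))
    return total / trials

n = 50
print(expected_zeros(n))   # ~ 2 n / sqrt(3) for large n
print(sampled_zeros(n))
```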
This presentation discusses using transformation optics and finite-difference time-domain (FDTD) simulations to design metamaterials that manipulate light in desired ways. It begins with an overview of transformation optics and how material parameters can be derived to effect a spatial transformation on light rays. An example of a "beam turner" is presented, along with the calculations to determine the required inhomogeneous, bi-anisotropic material properties. The presentation then discusses using FDTD simulations to model light propagation through these designed materials by discretizing Maxwell's equations in space and time. Examples shown include simulations of the beam turner and cloaking devices.
I. Cotaescu - "Canonical quantization of the covariant fields: the Dirac fiel... (SEENET-MTP)
The document discusses the canonical quantization of covariant fields on curved spacetimes, specifically the de Sitter spacetime. It introduces covariant fields that transform under representations of the spin group SL(2,C) and have covariant derivatives ensuring gauge invariance. Isometries of the spacetime generate Killing vectors and induce representations of the external symmetry group, which is the universal covering group of isometries and combines isometries with gauge transformations. Generators of these representations provide conserved observables that allow canonical quantization analogous to special relativity. The paper focuses on applying this framework to the Dirac field on de Sitter spacetime.
Paolo Creminelli "Dark Energy after GW170817" (SEENET-MTP)
GW170817 and GRB170817A were detected simultaneously, originating from the same source. This provides a tight limit on the speed of gravitational waves and photons, showing they travel at the same speed to a high degree of precision. The effective field theory of dark energy provides a framework to parametrize possible deviations from general relativity at cosmological scales. It allows gravitational waves and photons to potentially travel at different speeds depending on the coefficients in the effective field theory action. The detection of GW170817 and GRB170817A from the same source places strong constraints on these coefficients and possible dark energy models.
Mann, Black Holes of Negative Mass (1997) (Ispas Elena)
1) The document demonstrates that regions of negative energy density can undergo gravitational collapse into black holes with negative mass.
2) These black holes would have event horizons that are negatively curved compact surfaces, implying a non-trivial topology for the spacetime.
3) As an example, the collapse of a cloud of freely falling negative energy dust is modeled, showing it can form a black hole with the exterior geometry of a negative mass Schwarzschild-de Sitter metric, provided the initial negative energy density is not too large.
Stochastic Alternating Direction Method of Multipliers (Taiji Suzuki)
This document discusses stochastic optimization methods for solving regularized learning problems with structured regularization and large datasets. It proposes applying the alternating direction method of multipliers (ADMM) in a stochastic manner. Specifically, it introduces two stochastic ADMM methods for online data: RDA-ADMM, which extends regularized dual averaging with ADMM updates; and OPG-ADMM, which extends online proximal gradient descent with ADMM updates. These methods allow the regularization term to be optimized in batches, resolving computational difficulties, while the loss is optimized online using only a small number of samples per iteration.
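A minimal sketch of the stochastic flavor, assuming a lasso-regularized least-squares objective. This is a simplified inexact variant (one single-sample gradient step on the x-subproblem), not the paper's RDA-ADMM or OPG-ADMM updates; the l1 term is still handled exactly by its proximal (soft-thresholding) z-update:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (the ADMM z-update for the lasso)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_admm_lasso(A, y, lam=0.1, rho=1.0, eta=0.05, iters=3000, seed=0):
    """Minimize (1/2m)||Ax - y||^2 + lam*||z||_1 s.t. x = z, taking one
    single-sample stochastic gradient step on the x-subproblem per iteration
    instead of solving it exactly (an illustrative inexact variant)."""
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(m)                       # one sample per iteration
        g = (A[i] @ x - y[i]) * A[i]              # stochastic loss gradient
        x = x - eta * (g + rho * (x - z + u))     # inexact (gradient) x-update
        z = soft_threshold(x + u, lam / rho)      # exact proximal z-update
        u = u + x - z                             # dual update
    return z

# toy usage: sparse ground truth, noisy observations
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
w = np.zeros(10); w[:3] = [2.0, -1.5, 1.0]
y = A @ w + 0.01 * rng.standard_normal(200)
print(stochastic_admm_lasso(A, y))
```

The split mirrors the summary's point: the regularizer is optimized by a cheap exact proximal step, while the data-dependent loss is touched only through a small number of samples per iteration.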
This document is a dissertation proposal by Rishideep Roy at the University of Chicago in November 2014. The proposal is to generalize results on extreme values and entropic repulsion for two-dimensional discrete Gaussian free fields to a more general class of Gaussian fields with logarithmic correlations. Specifically, the proposal plans to find the convergence in law of the maximum of these log-correlated Gaussian fields under minimal assumptions, as well as obtain finer estimates on entropic repulsion which relates to the behavior of these fields near hard boundaries. The proposal provides background on related works and outlines the key steps to be taken, including proving expectations and tightness of maxima, invariance of maximum distributions under perturbations, approximating the fields, and
Zvonimir Vlah "Lagrangian perturbation theory for large scale structure forma... (SEENET-MTP)
This document discusses using Lagrangian perturbation theory and the effective field theory (EFT) approach to model large-scale structure (LSS) formation, including nonlinear effects. Key points include:
- The Lagrangian framework tracks fluid elements as they move due to gravity, described by a displacement field. This allows modeling of shell crossing nonlinearities.
- The EFT approach introduces a stress tensor to account for short-distance effects on long-wavelength modes. Counterterms are included to absorb uncertainties from neglected short-scale physics.
- Power spectrum and correlation function results from the Lagrangian EFT approach match those of the standard Eulerian EFT approach. The Lagrangian approach provides insights into counterterm structures and infrared resummation.
The Inverse Scattering Series (ISS) is a direct inversion method for a multidimensional acoustic, elastic, and anelastic earth. It shows that all inversion processing goals can be achieved directly, without any subsurface information, through task-specific subseries of the ISS. Using primaries in the data as subevents of the first-order internal multiples, the leading-order attenuator can predict the times of all first-order internal multiples and attenuate them.

However, the ISS internal multiple attenuation algorithm can be computationally demanding, especially in a complex earth. Its cost can be controlled with an approach, proposed in Terenghi et al. (2012), that is based on two angular quantities. The idea is to use the two angles as key control parameters: by limiting their variation, calculated contributions of the algorithm that are negligible can be disregarded. Moreover, the range of integration can be chosen as a compromise between the required degree of accuracy and the computational time saved. This time-saving approach is presented
- The document discusses a braneworld model where a 3-brane moves in a 5-dimensional anti-de Sitter bulk. The brane behaves effectively as a tachyon field with an inverse quartic potential.
- When the backreaction of the radion field (related to fluctuations of the brane position) is included, the tachyon Lagrangian is modified by its interaction with the radion. This results in an effective equation of state at large scales that describes "warm dark matter".
- The model extends the second Randall-Sundrum braneworld model to include nonlinear effects from the radion field, which distorts the anti-de Sitter geometry.
ON DECREASING OF DIMENSIONS OF FIELD-EFFECT TRANSISTORS WITH SEVERAL SOURCES (msejjournal)
This document analyzes mass and heat transport during manufacturing field-effect heterotransistors with several sources to decrease their dimensions. An analytical approach is introduced to model mass and heat transport during technological processes like doping and annealing. This approach accounts for nonlinearities in mass and heat transport and variations in physical parameters over space and time. The goal is to optimize doping distributions to increase compactness and homogeneity of transistor elements. Equations are developed to model concentration distributions of dopants and point defects over space and time during diffusion and ion implantation doping processes and subsequent annealing.
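As a toy counterpart of such transport equations, here is an explicit finite-difference step for linear Fickian diffusion with constant coefficients. The paper's model is nonlinear with spatially and temporally varying parameters, so this is only an illustrative stand-in:

```python
import numpy as np

def diffuse_1d(c0, D, dx, dt, steps):
    """Explicit finite-difference scheme for Fick's second law,
    dC/dt = D d2C/dx2, with zero-flux (Neumann) boundaries -- a highly
    simplified stand-in for the paper's nonlinear transport model."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme stability requires D*dt/dx^2 <= 1/2"
    c = c0.astype(float).copy()
    for _ in range(steps):
        lap = np.empty_like(c)
        lap[1:-1] = c[:-2] - 2 * c[1:-1] + c[2:]   # discrete Laplacian
        lap[0] = c[1] - c[0]                        # zero-flux left end
        lap[-1] = c[-2] - c[-1]                     # zero-flux right end
        c += r * lap
    return c

# initial dopant spike in the middle of the sample
c0 = np.zeros(101); c0[50] = 1.0
c = diffuse_1d(c0, D=1.0, dx=1.0, dt=0.4, steps=500)
```

With zero-flux ends the scheme conserves the total dopant dose exactly, which is a useful sanity check when the annealing step only redistributes (rather than adds) dopant.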
The all-electron GW method based on WIEN2k: Implementation and applications (ABDERRAHMANE REGGAD)
The all-electron GW method based on WIEN2k: Implementation and applications.
Ricardo I. Gómez-Abal
Fritz-Haber-Institut of the Max-Planck-Society
Faradayweg 4-6, D-14195 Berlin, Germany
15th WIEN2k Workshop, March 29, 2008
This document presents a closed-form solution for a class of discrete-time algebraic Riccati equations (DTAREs) under certain assumptions. It begins with background on Riccati equations and their importance in control theory. It then provides the assumptions considered, including that the A matrix eigenvalues are distinct. The main result is a closed-form solution for the DTARE when R=1. Extensions discussed include the solution's behavior as Q approaches zero and for repeated eigenvalues. Comparisons with numerical solutions verify the closed-form solution's accuracy.
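The paper's matrix closed form is not reproduced in the summary, but the scalar case with R = 1 illustrates the idea: clearing the denominator turns the DTARE into a quadratic in p, whose positive root can be checked against value iteration (a generic sketch, not the paper's formula):

```python
import math

def dare_residual(p, a, b, q):
    """Residual of the scalar discrete-time algebraic Riccati equation, R = 1:
    p = a^2 p - (a b p)^2 / (1 + b^2 p) + q."""
    return a * a * p - (a * b * p) ** 2 / (1.0 + b * b * p) + q - p

def dare_closed_form(a, b, q):
    """Positive root of b^2 p^2 + (1 - a^2 - q b^2) p - q = 0, obtained by
    multiplying the scalar DTARE above through by (1 + b^2 p)."""
    c1 = 1.0 - a * a - q * b * b
    return (-c1 + math.sqrt(c1 * c1 + 4.0 * b * b * q)) / (2.0 * b * b)

def dare_iteration(a, b, q, iters=200):
    """Value iteration: p_{t+1} = a^2 p_t - (a b p_t)^2/(1 + b^2 p_t) + q."""
    p = q
    for _ in range(iters):
        p = a * a * p - (a * b * p) ** 2 / (1.0 + b * b * p) + q
    return p

a, b, q = 0.9, 1.0, 1.0
print(dare_closed_form(a, b, q), dare_iteration(a, b, q))
```

Comparing the algebraic root against the numerically iterated fixed point mirrors the paper's verification of its closed-form solution against numerical solvers.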
R package 'bayesImageS': a case study in Bayesian computation using Rcpp and ... (Matt Moores)
There are many approaches to Bayesian computation with intractable likelihoods, including the exchange algorithm, approximate Bayesian computation (ABC), thermodynamic integration, and composite likelihood. These approaches vary in accuracy as well as scalability for datasets of significant size. The Potts model is an example where such methods are required, due to its intractable normalising constant. This model is a type of Markov random field, which is commonly used for image segmentation. The dimension of its parameter space increases linearly with the number of pixels in the image, making this a challenging application for scalable Bayesian computation. My talk will introduce various algorithms in the context of the Potts model and describe their implementation in C++, using OpenMP for parallelism. I will also discuss the process of releasing this software as an open source R package on the CRAN repository.
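To illustrate why the Potts model is both intractable and MCMC-friendly, here is a generic single-site Gibbs sweep for the q-state Potts model on a 4-neighbour lattice (an illustrative sketch in Python, not the package's C++/OpenMP implementation):

```python
import numpy as np

def gibbs_sweep(labels, beta, q, rng):
    """In-place raster-scan Gibbs sweep for the q-state Potts model on a
    4-neighbour lattice: p(z_i = k) ∝ exp(beta * #{neighbours with label k}).
    The joint normalising constant is intractable, but these full
    conditionals are trivial -- exactly what MCMC approaches exploit."""
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            counts = np.zeros(q)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    counts[labels[ni, nj]] += 1
            p = np.exp(beta * counts)
            p /= p.sum()
            labels[i, j] = rng.choice(q, p=p)
    return labels

rng = np.random.default_rng(0)
z = rng.integers(0, 3, size=(32, 32))   # q = 3, random start
for _ in range(20):
    gibbs_sweep(z, beta=1.2, q=3, rng=rng)
# at large beta, neighbouring sites align into same-label clusters
```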
Alexei Starobinsky - Inflation: the present status (SEENET-MTP)
This document summarizes a presentation on inflation and the present status of inflationary cosmology. It discusses the key epochs in the early universe, including inflation, and how inflation solved issues with prior models. Observational evidence for inflation is presented, including measurements of the primordial power spectrum and constraints on the tensor-to-scalar ratio. Simple single-field inflation models are shown to match observations. The document also discusses the generation of primordial perturbations from quantum fluctuations during inflation and how this provides the seeds for structure formation.
Solution manual of Goldsmith Wireless Communication (NIT Raipur)
This document discusses various topics related to wireless communication systems including:
- Bursty data communication has advantages like narrow pulse widths and less transmission time but high bandwidth and peak power requirements.
- Error probability calculations show very high requirements for signal to noise ratio.
- Different types of satellite orbits have different transmission delays, with low earth orbit preferred for delays less than 30ms.
- Modeling voice and data users on a channel shows maximum revenue with 1 data user and 3 voice users.
- Smaller cell reuse distances allow more capacity but increase interference.
- Path loss calculations show much higher losses in urban versus rural environments due to more reflectors/scatterers.
- The two
The document summarizes a presentation on minimizing tensor estimation error using alternating minimization. It begins with an introduction to tensor decompositions including CP, Tucker, and tensor train decompositions. It then discusses nonparametric tensor estimation using an alternating minimization method. The method iteratively updates components while holding other components fixed, achieving efficient computation. The analysis shows that after t iterations, the estimation error is bounded by the sum of a statistical error term and an optimization error term decaying exponentially in t. Real data analysis uses the method for multitask learning.
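The hold-others-fixed idea can be illustrated with a generic rank-R CP decomposition by alternating least squares (a sketch of the general technique, not the paper's nonparametric estimator):

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product: column r is kron(X[:, r], Y[:, r])."""
    R = X.shape[1]
    return np.stack([np.kron(X[:, r], Y[:, r]) for r in range(R)], axis=1)

def unfold(T, mode):
    """Mode-n unfolding consistent with the Khatri-Rao products used below."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, iters=100, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares:
    each factor is a linear least-squares solve with the other two fixed."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

def reconstruct(A, B, C):
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# toy check: recover an exactly rank-2 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 6, 7))
T = reconstruct(A0, B0, C0)
A, B, C = cp_als(T, rank=2)
err = np.linalg.norm(T - reconstruct(A, B, C)) / np.linalg.norm(T)
print(err)   # should be small for an exact low-rank tensor
```

Each update is a cheap linear solve, which is the computational-efficiency point the summary attributes to alternating minimization.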
R package 'bayesImageS': a case study in Bayesian computation using Rcpp and ...Matt Moores
There are many approaches to Bayesian computation with intractable likelihoods, including the exchange algorithm, approximate Bayesian computation (ABC), thermodynamic integration, and composite likelihood. These approaches vary in accuracy as well as scalability for datasets of significant size. The Potts model is an example where such methods are required, due to its intractable normalising constant. This model is a type of Markov random field, which is commonly used for image segmentation. The dimension of its parameter space increases linearly with the number of pixels in the image, making this a challenging application for scalable Bayesian computation. My talk will introduce various algorithms in the context of the Potts model and describe their implementation in C++, using OpenMP for parallelism. I will also discuss the process of releasing this software as an open source R package on the CRAN repository.
Alexei Starobinsky - Inflation: the present statusSEENET-MTP
This document summarizes a presentation on inflation and the present status of inflationary cosmology. It discusses the key epochs in the early universe, including inflation, and how inflation solved issues with prior models. Observational evidence for inflation is presented, including measurements of the primordial power spectrum and constraints on the tensor-to-scalar ratio. Simple single-field inflation models are shown to match observations. The document also discusses the generation of primordial perturbations from quantum fluctuations during inflation and how this provides the seeds for structure formation.
solution manual of goldsmith wireless communicationNIT Raipur
This document discusses various topics related to wireless communication systems including:
- Bursty data communication has advantages like narrow pulse widths and less transmission time but high bandwidth and peak power requirements.
- Error probability calculations show very high requirements for signal to noise ratio.
- Different types of satellite orbits have different transmission delays, with low earth orbit preferred for delays less than 30ms.
- Modeling voice and data users on a channel shows maximum revenue with 1 data user and 3 voice users.
- Smaller cell reuse distances allow more capacity but increase interference.
- Path loss calculations show much higher losses in urban versus rural environments due to more reflectors/scatterers.
- The two
The document summarizes a presentation on minimizing tensor estimation error using alternating minimization. It begins with an introduction to tensor decompositions including CP, Tucker, and tensor train decompositions. It then discusses nonparametric tensor estimation using an alternating minimization method. The method iteratively updates components while holding other components fixed, achieving efficient computation. The analysis shows that after t iterations, the estimation error is bounded by the sum of a statistical error term and an optimization error term decaying exponentially in t. Real data analysis uses the method for multitask learning.
Multidimensional integrals may be approximated by weighted averages of integrand values. Quasi-Monte Carlo (QMC) methods are more accurate than simple Monte Carlo methods because they carefully choose where to evaluate the integrand. This tutorial focuses on how quickly QMC methods converge to the correct answer as the number of integrand values increases. The answer may depend on the smoothness of the integrand and the sophistication of the QMC method. QMC error analysis may assumes the integrand belongs to a reproducing kernel Hilbert space or may assume that the integrand is an instance of a stochastic process with known covariance structure. These two approaches have interesting parallels. This tutorial also explores how the computational cost of achieving a good approximation to the integral depends on the dimension of the domain of the integrand. Finally, this tutorial explores methods for determining how many integrand values are needed to satisfy the error tolerance. Relevant software is described.
The document discusses error analysis for quasi-Monte Carlo methods used for numerical integration. It introduces the concepts of reproducing kernel Hilbert spaces and mean square discrepancy to analyze integration error. Specifically, it shows that the mean square discrepancy of randomized low-discrepancy point sets can be computed in O(n) operations, whereas the standard discrepancy requires O(n^2) operations, making randomized quasi-Monte Carlo methods more efficient for high-dimensional integration problems.
1) The document discusses instanton solutions in AdS4/CFT3 correspondence. Instantons represent exact solutions that allow probing the CFT away from the conformal vacuum.
2) For a conformally coupled scalar field, the bulk admits instanton solutions that correspond to an instability of AdS4. The boundary effective action is a 2+1 conformally coupled scalar with a φ6 interaction, representing a vacuum instability.
3) Tunneling probabilities between vacua can be computed from the instanton solutions, representing gravitational tunneling effects in the bulk and phase transitions in the boundary CFT.
This document summarizes Andrew Hone's talk on reductions of the discrete Hirota (discrete KP) equation. Plane wave reductions of the discrete Hirota equation yield Somos-type recurrence relations. Reductions of the discrete Hirota Lax pair give scalar Lax pairs with spectral parameters. Certain reductions produce periodic coefficients, leading to cluster algebra structures. Reductions of the discrete KdV equation are also considered, giving bi-Hamiltonian structures.
Using Gauss' law, the document calculates the electric field density (D) at various points due to different charge distributions including: an infinite straight line charge, infinite plane sheet charge, solid spherical charge, and two point charges. It determines D for charge distributions located along axes, sheets, and within spheres using the relevant equations in cylindrical, Cartesian and spherical coordinate systems. The document also calculates volume charge density from given displacement densities and determines displacement density from point charge locations and values.
Sergey Sibiryakov "Galactic rotation curves vs. ultra-light dark matter: Impl...SEENET-MTP
The document discusses ultra-light dark matter and its implications for galactic rotation curves. It begins by providing theoretical background on ultra-light dark matter and how it can form soliton cores within dark matter halos. It then discusses how the properties of these soliton cores, such as their mass and size, relate to the properties of the ultra-light dark matter particle. Finally, it discusses how measurements of galactic rotation curves could provide insights into ultra-light dark matter models by probing the presence and characteristics of these soliton cores.
This document summarizes a study that applies a recently developed effective theory called SCETG to model jet quenching in heavy ion collisions at the LHC. SCETG allows for the unified treatment of vacuum and medium-induced parton showers. The authors establish an analytic connection between the QCD evolution approach and traditional energy loss approach in the soft gluon emission limit. They quantify uncertainties in implementing in-medium modifications to hadron production cross sections and find the coupling between jets and the medium can be constrained to better than 10% accuracy. Numerical comparisons between the medium-modified evolution approach and energy loss formalism for modeling RAA are also presented.
Non-informative reparametrisation for location-scale mixturesChristian Robert
1) The document proposes a new reparameterization of location-scale mixtures in terms of the global mean and variance of the mixture distribution. This constrains the component parameters to a specific region of parameter space.
2) A Metropolis-within-Gibbs algorithm is developed for the reparameterized mixture model and implemented in the Ultimixt R package. This allows accurate estimation of component parameters through MCMC.
3) The reparameterization is shown to work well for mixtures of normal distributions, allowing good mixing of chains and convergence to the true densities, even in overfitted cases.
The document discusses harmonic maps from the Riemann surface M=S1×R or CP1\{0,1,∞} into the complex projective space CPn. It presents the DPW method for constructing harmonic maps using loop groups. Specifically, it constructs equivariant harmonic maps in CPn from degree one potentials in the loop algebra Λgσ, relating these to whether the maps are isotropic, weakly conformal, or non-conformal. It then considers the system of ODEs and scalar ODE that must be solved to generate the harmonic maps using this method.
This document provides an overview of a computational materials science lecture at Tokyo Tech. The lecture will cover first principles calculations, focusing on numerical analysis of electronic states. First principles calculations determine electronic states without experimental parameters by only using fundamental physical constants and numerical parameters. The lecture notes can be downloaded online and questions are welcome. Example materials that will be discussed include graphene and magnetic materials interfaces. Computational methods like density functional theory and plane wave basis sets will be introduced.
This document provides an overview of structural dynamics and single degree of freedom (SDOF) systems. It defines dynamic degrees of freedom and idealizes structures as SDOF systems with mass, spring, and damper elements. The document derives equations of motion for various SDOF systems using Newton's second law and D'Alembert's principle. It also discusses free vibration of undamped systems and determination of natural frequencies. Methods to determine the effective stiffness and natural frequency of SDOF systems are presented through examples.
Transport coefficients of QGP in strong magnetic fieldsDaisuke Satow
1. Transport coefficients of QGP in strong magnetic fields are discussed, including electrical conductivity.
2. Quarks exhibit Landau quantization in strong magnetic fields, with their motion confined to one dimension along the field. Gluons also acquire an effective mass through interactions with quarks.
3. Electrical conductivity is calculated using the linear response theory and Kubo formula. It is found that only the component along the magnetic field is non-zero, in contrast to the isotropic conductivity at weak fields.
1) The document derives the Bogoliubov-de Gennes (BdG) equation to solve for the eigenstates and eigenenergies of a superconductor with a single static magnetic impurity.
2) Using the BdG equation and Nambu spinor formalism, analytic expressions are obtained for the subgap Shiba states induced by the magnetic impurity. The Shiba state energies are given by E0 = ∆(1 - α2)/(1 + α2), where α is the dimensionless impurity strength.
3) The BdG spinor eigenstates for the Shiba states are derived. They consist of spin-up electrons and spin-down holes for one state, and spin
A new implementation of k-MLE for mixture modelling of Wishart distributionsFrank Nielsen
This document discusses a new implementation of k-MLE for mixture modelling of Wishart distributions. It begins with an overview of the Wishart distribution and its properties as an exponential family. It then describes the original k-MLE algorithm and how it can be adapted for Wishart distributions by using Hartigan and Wang's strategy instead of Lloyd's strategy to avoid empty clusters. The document also discusses approaches for initializing the clusters, such as k-means++, and proposes a heuristic to determine the number of clusters on-the-fly rather than fixing k.
1) The document discusses double trace flows in dS/CFT correspondence by calculating the time-evolving wavefunction Ψ in de Sitter space.
2) It derives the beta function for the double trace coupling λ in the dual CFT using holographic renormalization group techniques. The beta function matches expectations from large N field theory arguments.
3) It also shows the beta function derivation matches that obtained from the AdS/CFT correspondence upon analytic continuation, providing evidence that dS/CFT may describe the dual of quantum gravity in de Sitter space.
Computing the masses of hyperons and charmed baryons from Lattice QCDChristos Kallidonis
Poster presented at the Computational Sciences 2013 Conference (Winner of poster competition). We present results on the masses of all forty light, strange and charm baryons from Lattice QCD simulations, focusing particularly on the computational aspects and requirements of such calculations.
Large-scale structure non-Gaussianities with modal methods (Ascona)
1. LARGE SCALE STRUCTURE NON-GAUSSIANITIES WITH MODAL METHODS
Marcel Schmittfull
DAMTP, University of Cambridge
Collaborators: Paul Shellard, Donough Regan, James Fergusson
Based on 1108.3813, 1207.5678
LSS13, Ascona, 3 July 2013
2. PRIMORDIAL MOTIVATION
LSS = most promising window for non-Gaussianity after Planck
(3d data; single field inflation can be ruled out with halo bias; BOSS, DES, Euclid, ...)
Local template/linear regime rather well understood
But: Effects beyond local template/linear regime not well studied
Dalal et al. 2008, Jeong/Komatsu 2009, Sefusatti et al. 2009-2012, Baldauf/Seljak/Senatore 2011, ...
Vacuum expectation value of a quantum field perturbation $\delta\varphi$ with inflationary Lagrangian $L$:

$$\langle\Omega|\,\delta\varphi_{\mathbf{k}_1}\cdots\delta\varphi_{\mathbf{k}_n}\,|\Omega\rangle \;=\; \frac{\int\mathcal{D}[\delta\varphi]\;\delta\varphi_{\mathbf{k}_1}\cdots\delta\varphi_{\mathbf{k}_n}\,\exp\!\big(i\int_C L(\delta\varphi_{\mathbf{k}})\big)}{\int\mathcal{D}[\delta\varphi]\,\exp\!\big(i\int_C L(\delta\varphi_{\mathbf{k}})\big)}$$

Free theory is Gaussian: for $L \sim \delta\varphi^2$ the weight $e^{i\int_C L}$ is Gaussian, so

$$\langle\Omega|\,\delta\varphi_{\mathbf{k}_1}\cdots\delta\varphi_{\mathbf{k}_n}\,|\Omega\rangle \;=\; \begin{cases}\text{determined by the 2-point function}, & n \text{ even},\\ 0, & n \text{ odd}.\end{cases}$$

Interacting theory is non-Gaussian: for $L \sim \delta\varphi^3, \delta\varphi^4, \ldots$

$$\langle\Omega|\,\delta\varphi_{\mathbf{k}_1}\cdots\delta\varphi_{\mathbf{k}_n}\,|\Omega\rangle \;\neq\; 0$$

possible for all $n$; the $k$-dependence characterises the interactions.
➟ Inflationary interactions are mapped to specific types of non-Gaussianity
3. LATE-TIME MOTIVATION
Complementary late-time motivation for the LSS bispectrum:
Fry 1994; Verde et al. 1997-2002; Scoccimarro et al. 1998; Sefusatti et al. 2006
But: no LSS bispectrum pipeline; complicated modelling (non-linear DM, bias, RSD, correlations, systematics, ...)

From the (naive) local bias expansion $\delta_h(\mathbf{x}) \sim b_1\,\delta(\mathbf{x}) + \tfrac{1}{2}\,b_2\,\delta(\mathbf{x})^2 + \cdots$ we get

Halo power spectrum: $P_h(k) \approx b_1^2\,P_\delta(k)$
Halo bispectrum: $B_h(k_1,k_2,k_3) \approx b_1^3\,B_\delta(k_1,k_2,k_3) + b_1^2 b_2\,\big(P_\delta(k_1)P_\delta(k_2) + \text{perms}\big)$

Power spectrum alone: cannot distinguish a rescaling of $P_\delta$ from $b_1$ ➟ $b_1$–$\Omega_m$ degeneracy
Bispectrum: breaks the $b_1$–$\Omega_m$ degeneracy, i.e. $\Omega_m$ can be measured from LSS alone; also measures $b_2$
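The degeneracy argument above can be made concrete with a toy numerical sketch (not from the slides; the power-spectrum shape, the amplitude parameter `A`, and the bias values are illustrative): at the 2-point level only the combination $b_1^2 \times$ amplitude is observable, while the bispectrum scales with different powers of $b_1$ and therefore breaks the degeneracy.

```python
import numpy as np

def P_m(k, A):
    """Toy linear matter power spectrum with overall amplitude A (illustrative shape)."""
    return A * k / (1.0 + (k / 0.02) ** 2) ** 2

def B_m(k1, k2, k3, A):
    """Schematic tree-level matter bispectrum ~ products of power spectra
    (the F2 gravitational coupling kernels are omitted for brevity)."""
    P1, P2, P3 = P_m(k1, A), P_m(k2, A), P_m(k3, A)
    return 2.0 * (P1 * P2 + P1 * P3 + P2 * P3)

def P_h(k, b1, A):
    # Halo power spectrum: P_h ~ b1^2 P_m, so only b1^2 * A is observable
    return b1 ** 2 * P_m(k, A)

def B_h(k1, k2, k3, b1, b2, A):
    # Halo bispectrum: b1^3 B_m + b1^2 b2 (P P + perms), i.e. different
    # powers of b1 and A than the power spectrum
    Ps = [P_m(k, A) for k in (k1, k2, k3)]
    return (b1 ** 3 * B_m(k1, k2, k3, A)
            + b1 ** 2 * b2 * (Ps[0] * Ps[1] + Ps[0] * Ps[2] + Ps[1] * Ps[2]))

k = np.array([0.05, 0.1, 0.2])
# Two models with identical b1^2 * A give identical halo power spectra ...
assert np.allclose(P_h(k, b1=2.0, A=1.0), P_h(k, b1=1.0, A=4.0))
# ... but different halo bispectra (leading term scales as b1^3 A^2)
print(B_h(0.05, 0.1, 0.12, b1=2.0, b2=0.5, A=1.0))
print(B_h(0.05, 0.1, 0.12, b1=1.0, b2=0.5, A=4.0))
```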
4. BISPECTRUM
The bispectrum is the primary diagnostic for non-Gaussianity because it vanishes for Gaussian perturbations.
Defined for closed triangles (statistical homogeneity and isotropy):

$$\langle\delta(\mathbf{k}_1)\,\delta(\mathbf{k}_2)\,\delta(\mathbf{k}_3)\rangle \;=\; (2\pi)^3\,\delta_D(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3)\;f_{NL}\,B_\delta(k_1,k_2,k_3)$$

with non-linear amplitude $f_{NL}$ and bispectrum $B_\delta$.
Closed triangles require: sum of the two smaller sides ≥ longest side.
Bispectrum domain = 'tetrapyd' (= space of triangle configurations; every point $(k_1,k_2,k_3)$ with $k_i \le k_{max}$ corresponds to a triangle configuration).
[Figure: a triangle of wavevectors $\mathbf{k}_1, \mathbf{k}_2, \mathbf{k}_3$ and the tetrapyd domain in $(k_1,k_2,k_3)$ up to $k_{max}$]
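The closed-triangle condition defining the tetrapyd can be checked directly. This short sketch (illustrative, with an arbitrary grid size and $k_{max}$) enumerates the triangle configurations inside a $k$-cube:

```python
import numpy as np

def is_closed_triangle(k1, k2, k3):
    """A bispectrum configuration is valid iff the three wavenumbers can
    close into a triangle: sum of the two smaller sides >= longest side."""
    a, b, c = sorted((k1, k2, k3))
    return a + b >= c

# Enumerate the 'tetrapyd': all triangle configurations with k_i <= kmax
kmax, n = 0.5, 20
ks = np.linspace(kmax / n, kmax, n)
tetrapyd = [(k1, k2, k3)
            for k1 in ks for k2 in ks for k3 in ks
            if is_closed_triangle(k1, k2, k3)]

# Roughly half of the k-cube satisfies the triangle condition
frac = len(tetrapyd) / n ** 3
print(f"{frac:.2f} of the k-cube satisfies the triangle condition")
```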
5. BISPECTRUM SHAPES
Different inflation models induce different momentum dependencies (shapes) of $B_\delta(k_1, k_2, k_3)$:
• Squeezed triangles (local shape), $k_2 \approx k_3 \gg k_1$: arises in multifield inflation; a detection would rule out all single field models!
• Equilateral triangles, $k_1 \approx k_2 \approx k_3$: typically higher-derivative kinetic terms, e.g. DBI inflation.
• Folded triangles, $k_3 = 2k_1 = 2k_2$: e.g. non-Bunch-Davies vacuum.
[Figure: tetrapyd plots up to $k_{max}$ showing where each shape peaks, with the corresponding triangle configurations]
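To see why the local template peaks on squeezed triangles, one can evaluate it on a nearly scale-invariant potential spectrum. This is an illustrative sketch (the spectrum normalisation, tilt, and wavenumbers are arbitrary, and the signal-to-noise-style weight is a common heuristic rather than anything from the slides):

```python
def P_phi(k, A=1.0, ns=0.96):
    """Nearly scale-invariant primordial potential power spectrum, P ~ k^(ns-4)."""
    return A * k ** (ns - 4.0)

def B_local(k1, k2, k3, fNL=1.0):
    """Local-template bispectrum: 2 fNL (P1 P2 + P1 P3 + P2 P3)."""
    P1, P2, P3 = P_phi(k1), P_phi(k2), P_phi(k3)
    return 2.0 * fNL * (P1 * P2 + P1 * P3 + P2 * P3)

def weight(k1, k2, k3):
    """Signal-to-noise-like weight ~ B^2 / (P1 P2 P3), highlighting where a shape peaks."""
    return B_local(k1, k2, k3) ** 2 / (P_phi(k1) * P_phi(k2) * P_phi(k3))

squeezed = weight(0.001, 0.1, 0.1)      # k1 << k2 ~ k3
equilateral = weight(0.1, 0.1, 0.1)     # k1 ~ k2 ~ k3
print(squeezed / equilateral)           # >> 1: local shape peaks on squeezed triangles
```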
7. INITIAL CONDITIONS
Fast and general non-Gaussian initial conditions for N-body simulations
Arbitrary (including non-separable) bispectra, diagonal-independent trispectra
This is the only method to date for simulating structure formation for general inflation models.
Idea: the bispectrum weight $W_B$, drawn on the space of triangle configurations, is expanded in separable, uncorrelated basis functions $R_n$:

$$W_B \;=\; \alpha_0 R_0 + \alpha_1 R_1 + \alpha_2 R_2 + \cdots + \alpha_{n_{max}} R_{n_{max}}$$

(around 100 basis functions represent all investigated bispectra with high accuracy)
Regan, MS, Shellard, Fergusson PRD 86, 123524 (2012), arXiv:1108.3813
Application:
• Generate the non-Gaussian density
• Convert to initial particle positions and velocities by applying 2LPT to a glass configuration or regular grid (the spurious bispectrum at high z decays at low z)
• Feed into Gadget3
8. Simplest case: local non-Gaussianity
General case: arbitrary bispectrum
Wagner et al 2010
(=2 in local case)WB(k, k0
, k00
) ⌘
B (k, k0
, k00
)
P (k)P (k0) + P (k)P (k00) + P (k0)P (k00)
INITIAL CONDITIONS: MATHS
Aim: Create a non-Gaussian field Φ with a given bispectrum B
Fergusson, Regan, Shellard PRD 86, 063511 (2012), arXiv:1008.1730
Regan, MS, Shellard, Fergusson PRD 86, 123524 (2012), arXiv:1108.3813

Φ(x) = ΦG(x) + ΦNG(x)   (full field = Gaussian + non-Gaussian part)
Simplest local case: Φ(x) = ΦG(x) + fNL (ΦG²(x) − ⟨ΦG²⟩)
In general:
ΦNG(k) = (fNL/2) ∫ d³k′ d³k″ / (2π)³ δD(k − k′ − k″) W_B(k, k′, k″) ΦG(k′) ΦG(k″)

Requires ~N⁶ operations in general, but only ~N³ operations if W_B were separable:
W_B(k, k′, k″) = f1(k) f2(k′) f3(k″) + perms
Expanding W_B in separable basis functions gives N³ scaling for any* bispectrum
*Scoccimarro and Verde groups try to rewrite W_B analytically in separable form; this works sometimes, but not in general
N = #particles per dimension ~ 1000
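In the simplest local case the convolution above collapses to a real-space squaring, which makes the ~N³ cost immediate. A minimal numpy sketch (the grid size and random field are illustrative stand-ins, not the initial-conditions code from the paper):

```python
import numpy as np

def local_ng_potential(phi_g, f_nl):
    """Add local-type non-Gaussianity to a Gaussian potential field:
    Phi = Phi_G + f_NL * (Phi_G^2 - <Phi_G^2>).
    Runs in O(N^3) for an N^3 grid, versus ~N^6 for a general
    (non-separable) bispectrum weight W_B."""
    return phi_g + f_nl * (phi_g**2 - np.mean(phi_g**2))

# toy example: a 64^3 white-noise field in place of a proper cosmological realisation
rng = np.random.default_rng(0)
phi_g = rng.standard_normal((64, 64, 64))
phi = local_ng_potential(phi_g, f_nl=10.0)
```

For a general (non-separable) bispectrum the squaring is replaced by the basis expansion of W_B, each separable term again reducing to products of filtered fields.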
9. BISPECTRUM ESTIMATION
Fast and general bispectrum estimator for N-body simulations
MS, Regan, Shellard 1207.5678
Measure ~100 fNL amplitudes of separable basis shapes, combine them to reconstruct the full bispectrum
Scales like 100 × N³ instead of N⁶, where N ~ 1000 (speedup by factor ~10⁷)
Can estimate the bispectrum whenever the power spectrum is measured
Validated against PT at high z
Useful compression to ~100 numbers
Automatically includes all triangles
Loss of total S/N due to truncation of the basis is only a few percent
(could be improved with a larger basis; for ~N³ basis functions the estimator would be exact)
[Figure: excess bispectrum for local NG at z = 30: tree-level theory, its basis expansion, and the N-body ('gravity') measurement]
10. BISPECTRUM ESTIMATION: MATHS
Fergusson, Shellard et al. 2009-2012
MS, Regan, Shellard 1207.5678
Likelihood for fNL given a density perturbation δk (Edgeworth expansion):

L ∝ ∫ d³k1/(2π)³ d³k2/(2π)³ d³k3/(2π)³ [ 1 + (1/6) ⟨δk1 δk2 δk3⟩ (∂/∂δk1)(∂/∂δk2)(∂/∂δk3) + ··· ] × (det C)^(−1/2) ∏_ij exp(−(1/2) δ*_ki (C⁻¹)_ij δkj),   with ⟨δk1 δk2 δk3⟩ ∝ fNL B

Maximise w.r.t. fNL (given a theoretical bispectrum Btheo):

f̂NL^Btheo = (1/N_fNL) ∫ d³k1/(2π)³ d³k2/(2π)³ d³k3/(2π)³ (2π)³ δD(k1 + k2 + k3) Btheo(k1, k2, k3) δk1 δk2 δk3 / [P(k1) P(k2) P(k3)]

Requires ~N⁶ operations in general, but only ~N³ operations if Btheo were separable
➟ Measure amplitudes of separable basis functions
➟ Combine them to reconstruct the full bispectrum from the data:

B̂(k1, k2, k3) = Σn βn^R √[P(k1) P(k2) P(k3) / (k1 k2 k3)] Rn(k1, k2, k3)

where βn^R ≡ Σm λnm βm^Q depends on the data through

βm^Q ≡ ∫ d³x Mr(x) Ms(x) Mt(x),   Mr(x) ≡ ∫ d³k/(2π)³ e^(ik·x) qr(k) δk^obs / √(k P(k)),   m = (r, s, t)
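The separable trick can be sketched with FFTs: each filtered field Mr(x) costs one inverse FFT, and each βQ is a real-space sum, so the estimator is O(N³ log N) per basis function instead of O(N⁶). Below is a schematic with a toy power spectrum and a single constant mode function q0(k) = 1; the actual basis functions, the normalisation N_fNL and the λnm matrix follow 1207.5678 and are not reproduced here.

```python
import numpy as np

def filtered_field(delta_k, q_r, k, p_k):
    """M_r(x) = FFT^{-1}[ q_r(k) delta_k / sqrt(k P(k)) ], dropping the k = 0 mode."""
    w = np.where(k > 0, q_r / np.sqrt(np.maximum(k, 1e-30) * p_k), 0.0)
    return np.fft.ifftn(w * delta_k).real

def beta_q(m_r, m_s, m_t):
    """beta^Q_{rst} = sum_x M_r(x) M_s(x) M_t(x); these amplitudes are later
    combined with lambda_{nm} to give beta^R_n and the reconstructed B_hat."""
    return float(np.sum(m_r * m_s * m_t))

# toy setup: 32^3 grid, P(k) ~ 1/k with a floor at the fundamental mode
n, kf = 32, 2 * np.pi
kx = np.fft.fftfreq(n, d=1.0 / n) * kf
kg = np.meshgrid(kx, kx, kx, indexing="ij")
k = np.sqrt(kg[0] ** 2 + kg[1] ** 2 + kg[2] ** 2)
p_k = 1.0 / np.maximum(k, kf)
delta_k = np.fft.fftn(np.random.default_rng(1).standard_normal((n, n, n)))
m0 = filtered_field(delta_k, np.ones_like(k), k, p_k)
b0 = beta_q(m0, m0, m0)  # amplitude of the (0, 0, 0) basis triple
```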
12. MS, Regan, Shellard 1207.5678
3 realisations of 512³ particles in a L = 1600 Mpc/h box with zinit = 49 and k = 0.004 - 0.5 h/Mpc
Plot S/N-weighted bispectrum √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: tree-level theory vs N-body at z = 30]
GAUSSIAN SIMULATIONS
COMPARISON WITH TREE LEVEL
13. MS, Regan, Shellard 1207.5678
3 realisations of 512³ particles in a L = 1600 Mpc/h box with zinit = 49 and k = 0.004 - 0.5 h/Mpc
Plot S/N-weighted bispectrum √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: tree-level theory vs N-body at z = 2]
GAUSSIAN SIMULATIONS
COMPARISON WITH TREE LEVEL
14. MS, Regan, Shellard 1207.5678
3 realisations of 512³ particles in a L = 1600 Mpc/h box with zinit = 49 and k = 0.004 - 0.5 h/Mpc
Plot S/N-weighted bispectrum √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: tree-level theory vs N-body at z = 0]
GAUSSIAN SIMULATIONS
COMPARISON WITH TREE LEVEL
15. GAUSSIAN SIMULATIONS
ON SMALL SCALES
MS, Regan, Shellard 1207.5678
3 realisations of 512³ particles in a L = 400 Mpc/h box with zinit = 49 and k = 0.016 - 2.0 h/Mpc
Plot S/N-weighted bispectrum √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: N-body bispectrum signal at z = 2, 1, 0]
16. MS, Regan, Shellard 1207.5678
GAUSSIAN SIMULATIONS
COMPARISON WITH DARK MATTER
512³ particles in a L = 400 Mpc/h box with zinit = 49 and k = 0.016 - 2 h/Mpc
Plot DM density in a (40 Mpc/h)³ subbox and the bispectrum signal √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: (a) dark matter, z = 4; (b) bispectrum signal, z = 4]
17. MS, Regan, Shellard 1207.5678
GAUSSIAN SIMULATIONS
COMPARISON WITH DARK MATTER
512³ particles in a L = 400 Mpc/h box with zinit = 49 and k = 0.016 - 2 h/Mpc
Plot DM density in a (40 Mpc/h)³ subbox and the bispectrum signal √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: (c) dark matter, z = 2; (d) bispectrum signal, z = 2]
18. MS, Regan, Shellard 1207.5678
GAUSSIAN SIMULATIONS
COMPARISON WITH DARK MATTER
512³ particles in a L = 400 Mpc/h box with zinit = 49 and k = 0.016 - 2 h/Mpc
Plot DM density in a (40 Mpc/h)³ subbox and the bispectrum signal √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: (e) dark matter, z = 0; (f) bispectrum signal, z = 0. Left: dark matter distribution in a (40 Mpc/h)³ subbox of one of the G512_400 simulations at z = 4, 2 and 0. Right: measured (signal-to-noise-weighted) bispectrum in the range 0.016 h/Mpc ≤ k ≤ 2 h/Mpc.]
19. GAUSSIAN SIMULATIONS
SUMMARY
Summary Gaussian N-body simulations
[Figure: DM density and DM bispectrum signal at z = 4 and z = 0; pancakes, filaments & clusters in the density correspond to enhanced flattened and equilateral bispectrum signal]
MS, Regan, Shellard 1207.5678
Measured gravitational DM bispectrum
for all triangles down to k=2hMpc-1
Non-linearities mainly enhance
‘constant’ 1-halo bispectrum
Bispectrum characterises 3d DM
structures like pancakes, filaments,
clusters
Self-similarity
(constant contribution appears towards late
times at fixed length scale, and towards small
scales at fixed time)
21. LOCAL NON-GAUSSIAN SIMS
MS, Regan, Shellard 1207.5678
3 realisations of 512³ particles in a L = 1600 Mpc/h box with zinit = 49 and k = 0.004-0.5 h/Mpc, fNL = 10
Plot S/N-weighted excess bispectrum √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: tree-level theory vs N-body at z = 30]
22. LOCAL NON-GAUSSIAN SIMS
MS, Regan, Shellard 1207.5678
3 realisations of 512³ particles in a L = 1600 Mpc/h box with zinit = 49 and k = 0.004-0.5 h/Mpc, fNL = 10
Plot S/N-weighted excess bispectrum √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: tree-level theory vs N-body at z = 2]
23. LOCAL NON-GAUSSIAN SIMS
MS, Regan, Shellard 1207.5678
3 realisations of 512³ particles in a L = 1600 Mpc/h box with zinit = 49 and k = 0.004-0.5 h/Mpc, fNL = 10
Plot S/N-weighted excess bispectrum √(k1 k2 k3) B(k1, k2, k3) / √(P(k1) P(k2) P(k3))
[Figure: tree-level theory vs N-body at z = 0]
24. OTHER SHAPES
Excess DM bispectra for other non-Gaussian initial conditions
[Figure panels: multiple fields (local), higher-derivative operators (equilateral), non-standard vacuum (flattened)]
MS, Regan, Shellard 1207.5678
z = 0, kmax = 0.5 h/Mpc, 512³ particles
Non-linear regime:
• Tree-level shape is enhanced by the non-linear power spectrum
• Additional ~constant contribution to the bispectrum signal
Quantitative characterisation with cumulative S/N and 3d shape correlations in 1207.5678
25. Introduce scalar product ⟨·,·⟩est, shape correlation C and norm ||B||
(Babich et al. 2004; Fergusson, Regan, Shellard 2010)

⟨Bi, Bj⟩est ≡ (V/π) ∫_VB dk1 dk2 dk3 k1 k2 k3 Bi(k1, k2, k3) Bj(k1, k2, k3) / [P(k1) P(k2) P(k3)]

C(Bi, Bj) ≡ ⟨Bi, Bj⟩est / √(⟨Bi, Bi⟩est ⟨Bj, Bj⟩est) ∈ [−1, 1]

||B|| ≡ √⟨B, B⟩est   (total integrated S/N)

If |C(B1, B2)| ≪ 1, then the estimator for B1 cannot find any B2 and vice versa.

⟨f̂NL^Btheo⟩ = C(Bdata, Btheo) ||Bdata|| / ||Btheo||   (projection of data on theory)
SIMILARITY OF SHAPES
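On a discretised tetrapyd the scalar product and correlation above are a few lines of numpy. This toy version drops the constant V/π prefactor and uses an illustrative power-law P(k), so only the normalised correlation C is meaningful:

```python
import numpy as np

def shape_inner(bi, bj, k, p):
    """<Bi,Bj> ~ sum over closed triangles of k1 k2 k3 Bi Bj / (P P P)."""
    k1, k2, k3 = np.meshgrid(k, k, k, indexing="ij")
    # tetrapyd: sum of the two smaller sides >= longest side
    tetrapyd = (k1 + k2 >= k3) & (k2 + k3 >= k1) & (k3 + k1 >= k2)
    w = k1 * k2 * k3 / (p(k1) * p(k2) * p(k3))
    return np.sum(tetrapyd * w * bi(k1, k2, k3) * bj(k1, k2, k3))

def shape_corr(bi, bj, k, p):
    """C(Bi,Bj) in [-1,1]; |C| << 1 means the Bi estimator is blind to Bj."""
    return shape_inner(bi, bj, k, p) / np.sqrt(
        shape_inner(bi, bi, k, p) * shape_inner(bj, bj, k, p))

k = np.linspace(0.01, 0.5, 40)        # toy k range in h/Mpc
p = lambda q: q ** -2.0               # toy power spectrum
b_local = lambda k1, k2, k3: p(k1) * p(k2) + p(k2) * p(k3) + p(k3) * p(k1)
b_const = lambda k1, k2, k3: np.ones_like(k1)
c = shape_corr(b_local, b_const, k, p)  # local vs 'constant' shape
```

By construction C(B, B) = 1 for any shape, and Cauchy-Schwarz keeps C in [−1, 1].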
26. NG initial conditions with flocal_NL = 10: recovered f̂local_NL and shape correlations
MS, Regan, Shellard 1207.5678
[Figure: time evolution from z = 49 to today, for k ∈ [0.0039, 0.5], [0.0039, 0.25] and [0.0039, 0.13] h/Mpc, of the recovered f̂local_NL (input flocal_NL = 10), the integrated amplitude F̄NL of the estimated bispectrum B̂NG, its size ||B̂NG|| ∝ F̄NL, the shape correlation C(B̂NG, Btheo_local) ≳ 0.96, and the correlation C(B̂NG, B̂Gauss) with the Gaussian-ICs bispectrum]
27. TIME SHIFT MODEL
Time shift model
Non-Gaussian universe evolves slightly faster
(or slower) than Gaussian universe
Halos form earlier (later) in presence of
primordial non-Gaussianity
Primordial non-Gaussianity gives growth of the
1-halo bispectrum a ‘headstart’ (or delay)
[Diagram: 1-halo bispectrum at z in the non-Gaussian universe = 1-halo bispectrum at z + Δz in the Gaussian universe]
MS, Regan, Shellard 1207.5678
Motivates a simple form of the non-Gaussian excess bispectrum:

BNG = √[P^NL(k1) P^NL(k2) P^NL(k3) / (P(k1) P(k2) P(k3))] B(k1, k2, k3)   [perturbative with non-linear power]
      + c Δz ∂z[D^nh(z)] (k1 + k2 + k3)^ν   [time-shifted 1-halo term]
See Valageas for a similar philosophy in the Gaussian case (combining PT + halo model)
28. FITTING FORMULAE
Simple fitting formulae for grav. and primordial DM bispectrum shapes
Valid at 0 ≤ z ≤ 20, k ≤ 2 h/Mpc
3d shape correlation with measured shapes is ≥ 94.4% at 0 ≤ z ≤ 20 and ≥ 98 % at z=0
(≥ 99.8% for gravity at 0 ≤ z ≤ 20)
Only ~3 free parameters per inflation model (local, equilateral, flattened, [orthogonal])
MS, Regan, Shellard 1207.5678
Overall amplitude needs
to be rescaled by (poorly
understood) time-
dependent prefactor;
extends Gil-Marin et al
formula to smaller scales
and NG ICs
ν ≈ 1.7

BNG = √[P^NL(k1) P^NL(k2) P^NL(k3) / (P(k1) P(k2) P(k3))] B(k1, k2, k3)   [perturbative with non-linear power]
      + c Δz ∂z[D^nh(z)] (k1 + k2 + k3)^ν   [time-shifted 1-halo term]
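As a worked example, the fitting formula can be coded directly. The growth-derivative factor ∂z[D^nh] is passed in precomputed, and all parameter values here (c, Δz, the toy spectra) are placeholders to be read off Table IV or the linear theory of the chosen cosmology, not defaults from the paper:

```python
import numpy as np

def b_ng_fit(k1, k2, k3, b_tree, p_lin, p_nl, c, dDnh_dz, dz, nu=1.7):
    """B_NG = sqrt(P_NL(k1) P_NL(k2) P_NL(k3) / (P(k1) P(k2) P(k3))) * B_tree
            + c * dz * d/dz[D^{n_h}] * (k1 + k2 + k3)**nu
    First term: tree-level shape boosted by the non-linear power spectrum.
    Second term: time-shifted, ~constant 1-halo contribution."""
    boost = np.sqrt(p_nl(k1) * p_nl(k2) * p_nl(k3) /
                    (p_lin(k1) * p_lin(k2) * p_lin(k3)))
    return boost * b_tree(k1, k2, k3) + c * dz * dDnh_dz * (k1 + k2 + k3) ** nu

# sanity check: with linear power and no time shift, the fit reduces to B_tree
p = lambda q: q ** -2.0  # toy power spectrum
b_tree = lambda k1, k2, k3: p(k1) * p(k2) + p(k2) * p(k3) + p(k3) * p(k1)
val = b_ng_fit(0.1, 0.1, 0.2, b_tree, p, p, c=0.0, dDnh_dz=1.0, dz=0.0)
```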
[Figure 15: ||B|| vs 1 + z for (a) Gaussian, kmax = 0.5 h/Mpc and (b) Gaussian, kmax = 2 h/Mpc. Caption: Motivation for using the growth function D̄ in the simple fitting formula (64). The arbitrary weight w(z) in Bopt = Bgrav_NL + w(z)(k1 + k2 + k3)^ν is determined analytically such that C(B̂, Bopt) is maximal (for ν = 1.7). Plotted: ||Bgrav_NL|| (black dotted), ||w(z)(k1 + k2 + k3)^ν|| (green) and Bgrav_const (black dashed) as defined in (14), with fitting parameters given in Table IV, illustrating that w(z) = c1 D̄^nh(z) is a good approximation. The continuous black and red curves show ||Bfit|| from (64) and the estimated bispectrum ||B̂||, respectively. The overall normalisation can be adjusted with Nfit as explained in the main text.]
Table IV. Fitting parameters c1 and nh for the fit (64) of the matter bispectrum for Gaussian initial conditions (simulations G512g and G512_400) and c2 and nprim_h for the fit (69).

Simulation    L [Mpc/h]   c1,2                n_h^(prim)   min_z≤20 C   C(z=0)
G512g         1600        4.1 × 10⁶           7            99.8%        99.8%
Loc10         1600        2 × 10³             6            99.7%        99.8%
Eq100         1600        8.6 × 10²           6            97.9%        99.4%
Flat10        1600        1.2 × 10⁴           6            98.8%        98.9%
Orth100       1600        3.1 × 10²           5.5          91.0%        91.0%
G512_400      400         1.0 × 10⁷           8            99.8%        99.8%
Loc10_400     400         2 × 10³ (dD/da)     7            98.2%        99.0%
Eq100_400     400         8.6 × 10² (dD/da)   7            94.4%        97.9%
Flat10_400    400         1.2 × 10⁴ (dD/da)   7            97.7%        99.1%
Orth100_400   400         2.6 × 10²           6.5          97.3%        98.9%
…which is shown by the dotted line in the lower panels of Fig. 16. While it varies with redshift between 0.7 and 1.4 for kmax = 2 h/Mpc, it deviates by at most 8% from unity for kmax = 0.5 h/Mpc. The lower panels also show the measured integrated bispectrum size ||B̂|| and the two individual contributions to (64) when the normalisation factor Nfit is included. These quantities are divided by ||Bgrav_NL|| for convenience. At high redshifts the bispectrum size is essentially given by the contribution from Bgrav_NL, which equals the tree-level prediction for the gravitational bispectrum in this regime. The contribution from Bconst dominates at z ≲ 2 for kmax = 2 h/Mpc, when filamentary and spherical nonlinear structures are apparent. A similar transition can be seen at later times on larger scales in Fig. 16a, indicating self-similar behaviour.
Quality of fit: minimum over all z (0 ≤ z ≤ 20) and at z = 0 (last two columns of Table IV)
29. CONCLUSIONS
Conclusions
Efficient and general non-Gaussian N-body initial conditions
Fast estimation of full bispectrum
new standard diagnostic alongside power spectrum
Tracked time evolution of the DM bispectrum in a large suite of non-
Gaussian N-body simulations
Time shift model for effect of primordial non-Gaussianity
New fitting formulae for gravitational and primordial DM bispectra
See 1108.3813 (initial conditions)
1207.5678 (rest)
30. COMPARISON WITH OTHER ESTIMATORS
Brute force bispectrum estimator
➟ Slow (~10⁷ times slower for a 1000³ simulation)
Taking only subset of triangle configurations/orientations
➟ Non-optimal estimator: Throw away data
(instead, we compress theory domain, not data domain; and we quantify lost S/N~few %)
Binning
➟ Same as our estimator if basis functions are taken as top-hats
➟ But: Not known how much signal is lost with binning (our monomial basis loses a few %)
Correlations between triangles and with power spectrum
➟ Potentially easier to get correlations for ~100 numbers than for the full 3d bispectrum shape
Comparison with theory
➟ Easier with ~100 numbers than with the full 3d bispectrum shape
Experimental realism:
➟ Need to try... (Planck started with ~7 bispectrum estimators, 3 survived)