This document describes a numerical renormalization group (NRG) computation of nuclear magnetic relaxation rates using a two-center basis approach. The NRG and Lanczos methods are used to calculate the spin-lattice relaxation rate 1/T1 as a function of temperature T and distance r from the impurity. Results show that 1/T1 dependence on T changes as the probe crosses the Kondo screening cloud radius rK, and the phase of low-energy Friedel oscillations also changes, indicating the Kondo screening cloud radius can be measured via NMR.
GW170817: Dawn of multi-messenger astronomy Amruta Jaodand
A bystander's view of how the first neutron star merger detected by the LIGO Scientific Collaboration and Virgo was followed up by electromagnetic observations. Timeline curated from discovery papers, Twitter threads, and GCN circulars.
The document discusses the effects of the L29Q mutation in cardiac troponin C (cTnC) based on NMR spectroscopy studies. Key findings include:
1) NMR experiments found the L29Q mutation did not alter cTnC structure or calcium binding but decreased calcium sensitivity slightly and reduced binding to the N-terminal extension of cardiac troponin I.
2) Additional NMR studies produced varying results on the effects of L29Q on calcium sensitivity, with some finding a small increase and others no significant change.
3) NMR structure calculation determined the L29Q mutation did not significantly alter the structure of cTnC.
Nitrogen Chemistry in Diffuse Interstellar Medium Prince Tiwari
This is the project presentation I gave at the end of the VSRP-TIFR programme. It summarizes the study of nitrogen chemistry in the diffuse galactic cloud W49N with the help of data from the HIFI spectrometer on board the Herschel Space Observatory.
The document discusses event-by-event fluctuations in Hanbury-Brown–Twiss (HBT) radii measurements from heavy-ion collisions and how to characterize the distributions of these measurements. It presents formalisms for direct ensemble averages (DEAs) of HBT radii, which represent the true mean of the event-wise HBT distribution, and physical ensemble averages (PEAs), which are weighted averages. It then describes methods to estimate the DEA and its moments like variance from measurements of weighted averages across event sub-ensembles, allowing characterization of the underlying HBT distribution using only limited single-event information.
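The distinction between the two averages above can be made concrete with a small numerical sketch. This is an illustrative toy model, not the paper's formalism: event-wise radii, weights, and sub-ensemble size are all invented here, but the sketch shows how the unweighted mean (DEA) differs in construction from a weighted average (PEA), and how the event-wise variance can be recovered from the spread of sub-ensemble means.

```python
import random

# Toy event ensemble (all numbers are illustrative assumptions):
# each event has an HBT radius R_i and a weight w_i, e.g. a pair count.
random.seed(1)
n_events = 100_000
radii   = [random.gauss(5.0, 0.5) for _ in range(n_events)]      # fm
weights = [random.randint(50, 500) for _ in range(n_events)]     # pairs per event

# Direct ensemble average (DEA): the true unweighted mean of the event-wise distribution.
dea = sum(radii) / n_events

# Physical ensemble average (PEA): events weighted, as a fit to summed correlations would do.
pea = sum(w * r for w, r in zip(weights, radii)) / sum(weights)

# Estimate the event-wise variance from sub-ensemble averages:
# Var(mean of m events) ~ Var(event-wise) / m, so scale the sub-mean spread back up by m.
m = 100
sub_means = [sum(radii[i:i + m]) / m for i in range(0, n_events, m)]
grand = sum(sub_means) / len(sub_means)
var_sub = sum((s - grand) ** 2 for s in sub_means) / (len(sub_means) - 1)
var_event = m * var_sub

print(dea, pea, var_event)
```

With weights uncorrelated with the radii, as here, PEA and DEA coincide; the interesting cases in the paper are precisely those where they do not.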
The document discusses gravitational waves and binary systems. It provides context on the history of gravitational wave detection, from Einstein's early work developing the theory of gravitational waves to Joseph Weber's pioneering efforts to detect them in the 1960s. It also summarizes the development of laser interferometer gravitational wave detectors by researchers in the US, Germany, Italy, and the UK beginning in the 1970s and 1980s. Key detections by LIGO and Virgo are noted, including GW150914 in 2015. Theoretical work on modeling gravitational waveforms from coalescing compact binaries is summarized, from early perturbative approaches to more recent analytical methods like the effective one-body formalism.
1) The document discusses using the effective one body (EOB) formalism to model gravitational wave templates for LIGO and LISA. It summarizes the number and type of templates used in LIGO's first two observing runs.
2) It also discusses using EOB, post-Newtonian theory, numerical relativity simulations, and quantum field theory to model gravitational wave emission from binary black hole and binary neutron star mergers across different mass ratio and velocity regimes.
3) The document focuses on recent work extending EOB models to higher post-Minkowskian orders and including the effects of spin and tidal interactions, with the goal of more accurate gravitational wave template modeling.
Alessandra Buonanno gave a lecture on the analytical and numerical relativity approaches used to model gravitational waveforms from inspiraling binary systems. She discussed how post-Newtonian theory, effective one body theory, and numerical relativity are used to approximately and exactly solve Einstein's field equations. She emphasized the crucial synergy between analytical and numerical relativity approaches to develop accurate gravitational waveform models like EOBNR and Phenom that have been used to infer astrophysics from LIGO/Virgo detections.
This document discusses various methods for detecting neutrinos. It is very difficult to detect neutrinos due to their weak interactions. The earliest detection was through inverse beta decay using a nuclear reactor. Later, the Sudbury Neutrino Observatory was able to detect neutrinos via different interactions in deuterium, providing evidence of neutrino flavor oscillations. Now, large detectors like IceCube are detecting high-energy neutrinos from astrophysical sources. Measuring the neutrino mass precisely remains challenging but various techniques using beta decay spectra provide upper limits.
This document summarizes the EXO (Enriched Xenon Observatory) experiment which aims to search for neutrinoless double beta decay in 136Xe. It describes the EXO-200 detector which contains 200 kg of xenon enriched to 80% 136Xe. The detector measures both ionization and scintillation signals to achieve high energy resolution. The document discusses the goals of EXO-200 to search for 0νββ decay, measure the 2νββ half-life, and understand operating a large liquid xenon detector. It also describes plans to identify barium daughters from double beta decays using laser spectroscopy to achieve a background-free experiment.
Pdpta11 G-Ensemble for Meteorological Prediction Enhancement Hisham Ihshaish
This document discusses using an ensemble approach called Genetic Ensemble (G-Ensemble) to improve meteorological predictions. Meteorological models use initial conditions and physical parameterizations to predict variables over a domain at future time steps. However, predictions can be imperfect due to uncertainties in initial conditions and parameterizations. G-Ensemble aims to address this by running an ensemble of meteorological models with varied initial conditions and parameterizations to generate a collective prediction. The approach was tested on hurricane Katrina predictions, showing potential to improve forecast accuracy. Future work will further evaluate G-Ensemble on additional cases.
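The ensemble idea described above can be sketched in a few lines. The "model" below is a crude stand-in, not WRF or any real meteorological code, and the perturbation sizes are invented; the point is only the mechanics of varying initial conditions and a physics parameter across members and combining them into one prediction.

```python
import random

random.seed(7)

def toy_model(initial_temp: float, mixing_param: float, steps: int = 24) -> float:
    """Stand-in forecast: relax toward a fixed state at a parameter-dependent rate."""
    temp = initial_temp
    for _ in range(steps):
        temp += mixing_param * (288.0 - temp)   # crude relaxation toward 288 K
    return temp

# Ensemble members: each gets a perturbed initial condition and parameterization.
members = [toy_model(initial_temp=300.0 + random.gauss(0, 1.0),
                     mixing_param=0.05 + random.uniform(-0.01, 0.01))
           for _ in range(50)]

# Collective prediction: here the plain ensemble mean. G-Ensemble additionally
# uses a genetic algorithm to select/weight members, which this sketch omits.
forecast = sum(members) / len(members)
print(round(forecast, 2))
```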
This document summarizes a lecture on using gravitational wave waveform models to test general relativity and probe the nature of compact objects through gravitational wave observations. It discusses how waveform models can be used to bound post-Newtonian coefficients, constrain phenomenological merger-ringdown parameters, and probe the quasi-normal modes of black hole ringdowns. Measuring multiple modes could verify the no-hair theorem and black hole uniqueness properties. Future observations from LIGO and Virgo at design sensitivity may allow high-precision black hole spectroscopy and tests of general relativity in the strong, dynamical gravity regime.
Prof Tom Trainor (University of Washington, Seattle, USA) Rene Kotze
TITLE: Two cultures in high energy nuclear physics
Since the mid eighties a community originating within the Bevalac program at the LBNL has sought to achieve formation of a color-deconfined quark-gluon plasma in heavy ion (A-A) collisions using successively higher collision energies at the AGS, SPS, RHIC and now the LHC, emphasizing a flowing dense "partonic" medium as the principal phenomenon. During much of the same period the high energy physics (HEP) community studying elementary collisions (e-e, e-p, p-p) developed the modern theory of QCD, emphasizing dijet production (fragmentation of scattered partons to observable hadrons) as the principal (calculable) phenomenon. Initially it was assumed that the QGP phenomenon in most-central A-A collisions might be distinguished from the HEP dijet phenomenon in elementary collisions. However, strong overlaps in phenomenology have revealed significant conflicts between QGP and HEP "cultures," especially at RHIC and LHC energies. In this talk I review some of the history and contrast an assortment of experimental evidence and interpretations from the two cultures with suggested conflict resolution.
The document discusses using gravitational wave waveform models to infer astrophysical properties from observations of gravitational wave events. It describes how waveform models encode information about binary black hole parameters like mass and spin, and how Bayesian inference can be used to estimate these parameters from the detected gravitational wave signal. It also addresses assessing confidence in detections and evaluating potential modeling systematics by comparing waveform models to numerical relativity simulations.
The document discusses creating new real-time scheduling policies. It introduces real-time systems and scheduling, and describes how scheduling policies are commonly defined using priority functions. Existing scheduling policies like earliest deadline first are discussed. The goal is to formally define new scheduling policies and analyze their performance, especially for improving control in networked control systems.
- The document discusses gravitational waves and binary systems, including perturbative computations of the gravitational wave flux from binary systems up to order (v/c)^7.
- It covers the effective one body (EOB) method for modeling binary coalescence, including resummations of post-Newtonian results and the addition of ringdown effects. This provides the first complete waveforms for binary black hole coalescences.
- Developments are discussed such as extending EOB to include spinning bodies, comparisons to numerical relativity results, and using gravitational self-force calculations to improve EOB modeling.
Sensing Throughput Tradeoff for Cognitive Radio Networks with Noise Variance ... T. E. BOGALE
This document presents a method for sensing sub-bands in cognitive radio networks with uncertain noise variance. It proposes a new edge detector to detect the number of sub-bands in a wideband spectrum. It then identifies a reference white sub-band using average energy comparison. A test statistic is developed to sense the other sub-bands and optimize the sensing time. Simulation results show the detection probability of the proposed method and how it trades off sensing time with throughput. The method allows cognitive radios to maximize throughput while protecting primary users under uncertainty in noise variance.
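The reference-sub-band idea above can be illustrated numerically. This is a hedged sketch, not the paper's exact statistic: the sub-band layout, signal model, and threshold are all assumptions. It shows why normalizing each sub-band's average energy by a reference white sub-band cancels the unknown noise variance, so a fixed threshold still works.

```python
import random

random.seed(0)
n_samples = 4096

def subband_energy(occupied: bool, noise_var: float) -> float:
    """Average energy of one sub-band: noise, plus a unit-power signal if occupied."""
    sig_std = 1.0 if occupied else 0.0
    return sum((random.gauss(0, noise_var ** 0.5) + random.gauss(0, sig_std)) ** 2
               for _ in range(n_samples)) / n_samples

noise_var = 1.3                              # unknown to the detector
occupancy = [False, True, False, True]       # ground truth per sub-band
energies = [subband_energy(o, noise_var) for o in occupancy]

# Reference white sub-band: take the sub-band with minimum average energy.
ref = min(energies)

# Test statistic: energy ratio to the reference. The unknown noise variance
# cancels in the ratio, so the threshold needs no noise calibration.
threshold = 1.5
decisions = [e / ref > threshold for e in energies]
print(decisions)
```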
This document summarizes a presentation on computational photochemistry given by Filipp Furche. The presentation covered non-adiabatic dynamics simulations using time-dependent density functional theory and surface hopping. It described applications of these methods to study photocatalytic water splitting by TiO2 clusters, vitamin D photochemistry, and acetaldehyde photodissociation. The presentation concluded that hybrid functionals are needed for accurate simulations and outlined plans to improve the methodology.
1. The document provides an overview of natural bond orbital (NBO) analysis, describing the hierarchical atomic and molecular orbitals in NBO analysis and the key outputs from an NBO calculation.
2. Natural atomic orbitals (NAOs) are derived by diagonalizing the localized block of the full density matrix to give orbitals with maximum occupancy and one-center character.
3. The NBO analysis examines natural bond orbitals (NBOs), bond orders, charge distributions, and second-order perturbation estimates of donor-acceptor interactions.
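The NAO construction in point 2 reduces to an eigenvalue problem, which a minimal sketch can show. The density block below is a made-up symmetric matrix, not output from a real NBO run: diagonalizing an atom's one-center block of the density matrix yields orbitals (the NAOs) with well-defined occupancies, and the total occupancy (the trace) is preserved.

```python
import numpy as np

# Illustrative "one-center" density-matrix block in an AO basis (invented numbers).
D_atom = np.array([[1.90, 0.10, 0.02],
                   [0.10, 1.20, 0.05],
                   [0.02, 0.05, 0.30]])

# Eigen-decomposition: eigenvalues are occupancies, eigenvectors the NAOs.
occupancies, naos = np.linalg.eigh(D_atom)
occupancies, naos = occupancies[::-1], naos[:, ::-1]   # sort high-to-low occupancy

print("NAO occupancies:", occupancies)
print("trace preserved:", np.isclose(occupancies.sum(), np.trace(D_atom)))
```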
This document summarizes VLBI observations of supernova SN 2011dh made 14 days after its discovery, providing the earliest radio image of a supernova. The observations detected SN 2011dh at 22 GHz using a subset of the EVN array. The recovered flux density was approximately half the value measured by the EVLA at the same frequency and epoch, possibly due to extended emission or calibration issues. Precise coordinates for SN 2011dh were determined, linked to the ICRF, which may help improve future VLBI observations of the supernova.
1) Gravitational waves are predicted by Einstein's theory of general relativity and are generated by accelerating masses like binary star systems.
2) Modeling the motion and gravitational wave emission of compact binary systems like neutron stars and black holes requires using techniques like post-Newtonian theory, effective field theory approaches, and numerical relativity simulations.
3) Understanding strong gravitational fields like those near black holes requires tools from general relativity like multipolar expansions, matched asymptotic expansions, and analytic continuation techniques.
Parton distributions with QED corrections and LHC phenomenology juanrojochacon
The document discusses parton distribution functions (PDFs) that include quantum electrodynamics (QED) corrections. It summarizes the NNPDF2.3QED PDF set, which is the first to include next-to-next-to-leading order (NNLO) QCD and leading order (LO) QED effects. The photon PDF is directly constrained by LHC data for the first time. The PDF set improves constraints on the photon PDF from both DIS and LHC Drell-Yan data. It also discusses implications for LHC phenomenology from photon-initiated contributions.
1. The document discusses potential low frequency gravitational wave sources that could be detected by LISA, including galactic white dwarf binaries, massive black hole binaries, and extreme mass ratio inspirals.
2. LISA could detect thousands of massive black hole binaries and provide precise measurements of their parameters like mass and spin, enabling tests of general relativity and learning about black hole formation mechanisms.
3. Extreme mass ratio inspirals, where a compact object spirals into a massive black hole, could occur at a rate of 10^-7 per year in our galaxy, allowing precision cosmology and tests of the no-hair theorem.
The document summarizes the NNPDF3.1 global analysis which provides an updated determination of parton distribution functions (PDFs) from experimental data. Key points include:
1) NNPDF3.1 includes new high-precision measurements from the LHC as well as NNLO QCD calculations, allowing more data to be included. It also fits the charm PDF rather than assuming it is purely perturbative.
2) The new data provides stronger constraints on PDFs, particularly the gluon and down quark, significantly reducing their uncertainties. It also shows good agreement with the previous NNPDF3.0 analysis.
3) For the first time, NNPDF3.1 includes LHC
The document summarizes measurements of the W+W- cross section in √s = 7 TeV pp collisions at the LHC by the ATLAS and CMS collaborations, using integrated luminosities of 34-36 pb^-1. Both collaborations observe event counts consistent with Standard Model expectations for W+W- production. The measured cross sections are ATLAS: 41 +20/-16 pb and CMS: 41.1 ± 15.3 (stat) ± 5.8 (sys) ± 4.5 (lumi) pb, consistent with the Standard Model prediction of 43.0 ± 2.0 pb. CMS also finds the ratio of W+W- to inclusive W production consistent with Standard Model expectations.
Support for the thermal origin of the Pioneer anomaly Sérgio Sacani
The document presents a thermal model of the Pioneer 10 spacecraft constructed using finite-element analysis. The model incorporates design documentation and flight telemetry data to model the temperature distribution and thermal radiation across the spacecraft surfaces. It finds that the magnitude, temporal behavior, and direction of the thermal recoil force predicted by the model are similar to the properties of the observed Pioneer anomaly. Independent estimates of parameters characterizing the thermal recoil force from Doppler data analysis show no statistically significant difference from the thermal modeling results, suggesting the anomaly can be explained by thermal radiation without needing to invoke new physics.
This document discusses nonlinear optics and the dynamical Berry phase. It introduces nonlinear optics and summarizes early experiments. It then discusses how the Berry phase is related to nonlinear optical effects like second harmonic generation (SHG). Computational methods are presented for calculating SHG and other nonlinear optical properties from first principles using time-dependent density functional theory and the dynamical Berry phase. Examples of applying these methods to study SHG in semiconductors are provided.
The document provides an introduction to basic concepts in nuclear physics, including:
- Binding energy and the liquid drop model, which describes the saturation of nuclear forces.
- Nuclear dimensions and the different energy scales involved.
- The Fermi gas model, which treats nuclei as two fermion gases and can provide constants for binding energy formulas.
- The shell model, which incorporates a mean field potential and spin-orbit potential to reproduce shell structure in nuclei.
- Isospin, which treats protons and neutrons as states of a single particle to explain similarities in their behavior.
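The liquid drop model in the first bullet can be made concrete with the semi-empirical (Bethe-Weizsäcker) binding energy formula. The coefficients below (in MeV) are typical textbook values and vary between fits; this is a numeric sketch of the model, not values taken from the document.

```python
# Typical textbook liquid-drop coefficients in MeV (fits differ in detail).
a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0

def binding_energy(Z: int, A: int) -> float:
    """Semi-empirical binding energy in MeV for Z protons and A nucleons."""
    N = A - Z
    pairing = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        pairing = +a_p / A ** 0.5          # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -a_p / A ** 0.5          # odd-odd nuclei are less bound
    return (a_v * A                                  # volume: saturation of nuclear forces
            - a_s * A ** (2 / 3)                     # surface correction
            - a_c * Z * (Z - 1) / A ** (1 / 3)       # Coulomb repulsion
            - a_a * (A - 2 * Z) ** 2 / A             # asymmetry (Fermi-gas) term
            + pairing)

# Binding energy per nucleon peaks near iron at roughly 8.8 MeV.
print(binding_energy(26, 56) / 56)   # Fe-56
```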
The document summarizes the history and current state of the post-Newtonian (PN) approximation for modeling compact binary systems and gravitational waves. It discusses how PN theory has achieved an "unreasonable accuracy" in describing binary pulsars and is now being used to construct initial data and compare waveforms for numerical relativity simulations.
Ion-acoustic rogue waves in multi-ion plasmas Mehedi Hassan
This document summarizes a presentation on ion-acoustic rogue waves in multi-ion plasmas. The presentation includes:
1. An introduction to ion-acoustic waves in a pair-ion plasma medium and the derivation of the nonlinear Schrödinger equation to model the system.
2. Analysis showing the modulational instability of ion-acoustic waves leads to the generation of rogue waves in unstable regions where the ratio P/Q is positive.
3. Results demonstrating how parameters like the non-thermal parameter, mass ratios of positive and negative ions, and temperatures of inertialess components affect the stable and unstable wave regions and properties of first and second order rogue waves.
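The P/Q stability criterion in point 2 can be sketched with the standard NLS modulational-instability result. The P and Q values below are illustrative numbers, not coefficients derived from the multi-ion plasma dispersion relation: for i dphi/dt + P d2phi/dx2 + Q|phi|^2 phi = 0, a carrier of amplitude phi0 is unstable to a perturbation of wavenumber k when P/Q > 0 and k^2 < 2(Q/P) phi0^2.

```python
def growth_rate(P: float, Q: float, k: float, phi0: float) -> float:
    """Modulational-instability growth rate of the cubic NLS equation
    i dphi/dt + P d2phi/dx2 + Q |phi|^2 phi = 0; returns 0.0 if stable."""
    if P == 0 or Q == 0 or P / Q <= 0:
        return 0.0                              # stable region: P/Q not positive
    disc = 2 * (Q / P) * phi0 ** 2 - k ** 2
    if disc <= 0:
        return 0.0                              # perturbation wavenumber too large
    return abs(P) * k * disc ** 0.5

print(growth_rate(P=0.5, Q=1.0, k=0.5, phi0=1.0))    # unstable: positive rate
print(growth_rate(P=-0.5, Q=1.0, k=0.5, phi0=1.0))   # stable: 0.0
```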
Ab-initio real-time spectroscopy: application to non-linear optics Claudio Attaccalite
This document discusses ab-initio real-time spectroscopy and its application to non-linear optics. It begins with an overview of non-linear optics and the polarization response. It then discusses using time-dependent density functional theory to calculate nonlinear optical properties in real time by solving the time-dependent Schrödinger equation under an external electric field. Examples are given of calculating second and third harmonic generation in materials. The document also discusses approaches to address challenges like treating bulk polarization and including many-body effects.
This document summarizes the EXO (Enriched Xenon Observatory) experiment which aims to search for neutrinoless double beta decay in 136Xe. It describes the EXO-200 detector which contains 200kg of xenon enriched to 80% 136Xe. The detector measures both ionization and scintillation signals to achieve high energy resolution. The document discusses the goals of EXO-200 to search for 0νββ decay, measure the 2νββ half-life, and understand operating a large liquid xenon detector. It also describes plans to identify barium daughters from double beta decays using laser spectroscopy to achieve a background-free experiment.
Pdpta11 G-Ensemble for Meteorological Prediction EnhancementHisham Ihshaish
This document discusses using an ensemble approach called Genetic Ensemble (G-Ensemble) to improve meteorological predictions. Meteorological models use initial conditions and physical parameterizations to predict variables over a domain at future time steps. However, predictions can be imperfect due to uncertainties in initial conditions and parameterizations. G-Ensemble aims to address this by running an ensemble of meteorological models with varied initial conditions and parameterizations to generate a collective prediction. The approach was tested on hurricane Katrina predictions, showing potential to improve forecast accuracy. Future work will further evaluate G-Ensemble on additional cases.
This document summarizes a lecture on using gravitational wave waveform models to test general relativity and probe the nature of compact objects through gravitational wave observations. It discusses how waveform models can be used to bound post-Newtonian coefficients, constrain phenomenological merger-ringdown parameters, and probe the quasi-normal modes of black hole ringdowns. Measuring multiple modes could verify the no-hair theorem and black hole uniqueness properties. Future observations from LIGO and Virgo at design sensitivity may allow high-precision black hole spectroscopy and tests of general relativity in the strong, dynamical gravity regime.
Prof Tom Trainor (University of Washington, Seattle, USA)Rene Kotze
TITLE: Two cultures in high energy nuclear physics
Since the mid eighties a community originating within the Bevalac program at the LBNL has sought to achieve formation of a color-deconfined quark-gluon plasma in heavy ion (A-A) collisions using successively higher collision energies at the AGS, SPS, RHIC and now the LHC, emphasizing a flowing dense "partonic" medium as the principal phenomenon. During much of the same period the high energy physics (HEP) community studying elementary collisions (e-e, e-p, p-p) developed the modern theory of QCD, emphasizing dijet production (fragmentation of scattered partons to observable hadrons) as the principal (calculable) phenomenon. Initially it was assumed that the QGP phenomenon in most-central A-A collisions might be distinguished from the HEP dijet phenomenon in elementary collisions. However, strong overlaps in phenomenology have revealed significant conflicts between QGP and HEP "cultures," especially at RHIC and LHC energies. In this talk I review some of the history and contrast an assortment of experimental evidence and interpretations from the two cultures with suggested conflict resolution.
The document discusses using gravitational wave waveform models to infer astrophysical properties from observations of gravitational wave events. It describes how waveform models encode information about binary black hole parameters like mass and spin, and how Bayesian inference can be used to estimate these parameters from the detected gravitational wave signal. It also addresses assessing confidence in detections and evaluating potential modeling systematics by comparing waveform models to numerical relativity simulations.
The document discusses creating new real-time scheduling policies. It introduces real-time systems and scheduling, and describes how scheduling policies are commonly defined using priority functions. Existing scheduling policies like earliest deadline first are discussed. The goal is to formally define new scheduling policies and analyze their performance, especially for improving control in networked control systems.
- The document discusses gravitational waves and binary systems, including perturbative computations of gravitational wave flux from binary systems up to order v7/c7.
- It covers the effective one body (EOB) method for modeling binary coalescence, including resummations of post-Newtonian results and the addition of ringdown effects. This provides the first complete waveforms for binary black hole coalescences.
- Developments are discussed such as extending EOB to include spinning bodies, comparisons to numerical relativity results, and using gravitational self-force calculations to improve EOB modeling.
Sensing Throughput Tradeoff for Cognitive Radio Networks with Noise Variance ...T. E. BOGALE
This document presents a method for sensing sub-bands in cognitive radio networks with uncertain noise variance. It proposes a new edge detector to detect the number of sub-bands in a wideband spectrum. It then identifies a reference white sub-band using average energy comparison. A test statistic is developed to sense the other sub-bands and optimize the sensing time. Simulation results show the detection probability of the proposed method and how it trades off sensing time with throughput. The method allows cognitive radios to maximize throughput while protecting primary users under uncertainty in noise variance.
This document summarizes a presentation on computational photochemistry given by Filipp Furche. The presentation covered non-adiabatic dynamics simulations using time-dependent density functional theory and surface hopping. It described applications of these methods to study photocatalytic water splitting by TiO2 clusters, vitamin D photochemistry, and acetaldehyde photodissociation. The presentation concluded that hybrid functionals are needed for accurate simulations and outlined plans to improve the methodology.
1. The document provides an overview of natural bond orbital (NBO) analysis, describing the hierarchical atomic and molecular orbitals in NBO analysis and the key outputs from an NBO calculation.
2. Natural atomic orbitals (NAOs) are derived by diagonalizing the localized block of the full density matrix to give orbitals with maximum occupancy and one-center character.
3. The NBO analysis examines natural bond orbitals (NBOs), bond orders, charge distributions, and second-order perturbation estimates of donor-acceptor interactions.
This document summarizes VLBI observations of supernova SN 2011dh made 14 days after its discovery, providing the earliest radio image of a supernova. The observations detected SN 2011dh at 22 GHz using a subset of the EVN array. The recovered flux density was approximately half the value measured by the EVLA at the same frequency and epoch, possibly due to extended emission or calibration issues. Precise coordinates for SN 2011dh were determined, linked to the ICRF, which may help improve future VLBI observations of the supernova.
1) Gravitational waves are predicted by Einstein's theory of general relativity and are generated by accelerating masses like binary star systems.
2) Modeling the motion and gravitational wave emission of compact binary systems like neutron stars and black holes requires using techniques like post-Newtonian theory, effective field theory approaches, and numerical relativity simulations.
3) Understanding strong gravitational fields like those near black holes requires tools from general relativity like multipolar expansions, matched asymptotic expansions, and analytic continuation techniques.
Parton distributions with QED corrections and LHC phenomenologyjuanrojochacon
The document discusses parton distribution functions (PDFs) that include quantum electrodynamics (QED) corrections. It summarizes the NNPDF2.3QED PDF set, which is the first to include next-to-next-to-leading order (NNLO) QCD and leading order (LO) QED effects. The photon PDF is directly constrained by LHC data for the first time. The PDF set improves constraints on the photon PDF from both DIS and LHC Drell-Yan data. It also discusses implications for LHC phenomenology from photon-initiated contributions.
1. The document discusses potential low frequency gravitational wave sources that could be detected by LISA, including galactic white dwarf binaries, massive black hole binaries, and extreme mass ratio inspirals.
2. LISA could detect thousands of massive black hole binaries and provide precise measurements of their parameters like mass and spin, enabling tests of general relativity and learning about black hole formation mechanisms.
3. Extreme mass ratio inspirals, where a compact object spirals into a massive black hole, could occur at a rate of 10⁻⁷ per year in our galaxy, allowing precision cosmology and tests of the no-hair theorem.
The document summarizes the NNPDF3.1 global analysis which provides an updated determination of parton distribution functions (PDFs) from experimental data. Key points include:
1) NNPDF3.1 includes new high-precision measurements from the LHC as well as NNLO QCD calculations, allowing more data to be included. It also fits the charm PDF rather than assuming it is purely perturbative.
2) The new data provides stronger constraints on PDFs, particularly the gluon and down quark, significantly reducing their uncertainties. It also shows good agreement with the previous NNPDF3.0 analysis.
3) For the first time, NNPDF3.1 includes LHC
The document summarizes measurements of the W+W− cross section in √s = 7 TeV pp collisions at the LHC by the ATLAS and CMS collaborations, using integrated luminosities of 34-36 pb⁻¹. Both collaborations observe event counts consistent with Standard Model expectations for W+W− production. The measured cross sections are ATLAS: 41 +20/−16 pb and CMS: 41.1 ± 15.3 (stat) ± 5.8 (sys) ± 4.5 (lumi) pb, consistent with the Standard Model prediction of 43.0 ± 2.0 pb. CMS also finds the ratio of W+W− to inclusive W production consistent with Standard Model expectations.
Support for the thermal origin of the Pioneer anomaly - Sérgio Sacani
The document presents a thermal model of the Pioneer 10 spacecraft constructed using finite-element analysis. The model incorporates design documentation and flight telemetry data to model the temperature distribution and thermal radiation across the spacecraft surfaces. It finds that the magnitude, temporal behavior, and direction of the thermal recoil force predicted by the model are similar to the properties of the observed Pioneer anomaly. Independent estimates of parameters characterizing the thermal recoil force from Doppler data analysis show no statistically significant difference from the thermal modeling results, suggesting the anomaly can be explained by thermal radiation without needing to invoke new physics.
This document discusses nonlinear optics and the dynamical Berry phase. It introduces nonlinear optics and summarizes early experiments. It then discusses how the Berry phase is related to nonlinear optical effects like second harmonic generation (SHG). Computational methods are presented for calculating SHG and other nonlinear optical properties from first principles using time-dependent density functional theory and the dynamical Berry phase. Examples of applying these methods to study SHG in semiconductors are provided.
The document provides an introduction to basic concepts in nuclear physics, including:
- Binding energy and the liquid drop model, which describes the saturation of nuclear forces.
- Nuclear dimensions and the different energy scales involved.
- The Fermi gas model, which treats nuclei as two fermion gases and can provide constants for binding energy formulas.
- The shell model, which incorporates a mean field potential and spin-orbit potential to reproduce shell structure in nuclei.
- Isospin, which treats protons and neutrons as states of a single particle to explain similarities in their behavior.
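The liquid-drop model mentioned above is usually written as the semi-empirical (Bethe-Weizsäcker) mass formula. A minimal sketch, using one common set of textbook coefficients (the exact values vary between fits and are illustrative here):

```python
# Semi-empirical (liquid-drop) binding energy B(Z, A) in MeV,
# with one standard set of textbook coefficients.

def binding_energy(Z, A):
    """Bethe-Weizsaecker binding energy in MeV for a nucleus (Z, A)."""
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    # Pairing term: positive for even-even, negative for odd-odd, else zero.
    if Z % 2 == 0 and N % 2 == 0:
        delta = a_p / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:
        delta = -a_p / A**0.5
    else:
        delta = 0.0
    return (a_v * A                         # volume (saturation of nuclear forces)
            - a_s * A**(2 / 3)              # surface correction
            - a_c * Z * (Z - 1) / A**(1 / 3)  # Coulomb repulsion
            - a_a * (N - Z)**2 / A          # asymmetry
            + delta)                        # pairing

# Binding energy per nucleon peaks near iron, reflecting saturation:
print(binding_energy(26, 56) / 56)  # about 8.8 MeV (empirical value: 8.79)
```

The Fermi gas model mentioned above is one way to estimate the asymmetry coefficient $a_a$ from first principles.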
The document summarizes the history and current state of the post-Newtonian (PN) approximation for modeling compact binary systems and gravitational waves. It discusses how PN theory has achieved an "unreasonable accuracy" in describing binary pulsars and is now being used to construct initial data and compare waveforms for numerical relativity simulations.
Ion-acoustic rogue waves in multi-ion plasmas - Mehedi Hassan
This document summarizes a presentation on ion-acoustic rogue waves in multi-ion plasmas. The presentation includes:
1. An introduction to ion-acoustic waves in a pair-ion plasma medium and the derivation of the nonlinear Schrödinger equation to model the system.
2. Analysis showing the modulational instability of ion-acoustic waves leads to the generation of rogue waves in unstable regions where the ratio P/Q is positive.
3. Results demonstrating how parameters like the non-thermal parameter, mass ratios of positive and negative ions, and temperatures of inertialess components affect the stable and unstable wave regions and properties of first and second order rogue waves.
Ab-initio real-time spectroscopy: application to non-linear optics - Claudio Attaccalite
This document discusses ab-initio real-time spectroscopy and its application to non-linear optics. It begins with an overview of non-linear optics and the polarization response. It then discusses using time-dependent density functional theory to calculate nonlinear optical properties in real time by solving the time-dependent Schrödinger equation under an external electric field. Examples are given of calculating second and third harmonic generation in materials. The document also discusses approaches to address challenges like treating bulk polarization and including many-body effects.
UCSD NANO 266 Quantum Mechanical Modelling of Materials and Nanostructures is a graduate class that provides students with a highly practical introduction to the application of first principles quantum mechanical simulations to model, understand and predict the properties of materials and nano-structures. The syllabus includes: a brief introduction to quantum mechanics and the Hartree-Fock and density functional theory (DFT) formulations; practical simulation considerations such as convergence, selection of the appropriate functional and parameters; interpretation of the results from simulations, including the limits of accuracy of each method. Several lab sessions provide students with hands-on experience in the conduct of simulations. A key aspect of the course is in the use of programming to facilitate calculations and analysis.
The document summarizes research on finding electromagnetic counterparts to gravitational wave sources detected by LIGO and Virgo. It discusses that neutron star mergers are a promising source of both gravitational waves and short gamma-ray bursts. Numerical simulations show neutron star mergers produce neutron-rich debris ejected at high velocities, which could power a luminous "kilonova" lasting several days. Future wide-field optical surveys like LSST could detect such kilonova emissions from neutron star mergers within the gravitational wave detection range of advanced LIGO and Virgo, helping associate gravitational wave sources with electromagnetic events.
Quantum force sensing with optomechanical transducers - Ondrej Cernotik
Optomechanical force sensing is an established measurement technique that can reach remarkable precision. In most applications, the system exerting the force on the mechanical oscillator is treated classically and we are not interested in any coherence between states of the system that give rise to different forces. A full quantum treatment, however, enables richer physics since measuring more such systems can lead to interference effects.
In this talk, I will show that the coherence can survive the measurement and can be used for quantum-technological applications. I will consider a model example of spin readout in superconducting qubits. Coupling two transmon qubits to mechanical oscillators and reading out the mechanical positions using a single beam of light provides information on the total spin of the qubits. It is thus possible to conditionally generate entanglement between the two qubits. The system represents a basic quantum network with superconducting circuits. The scheme has modest requirements on the system parameters; it does not require ground-state cooling or resolved-sideband regime and can work with quantum cooperativity moderately larger than unity.
Afterwards, I will consider another scheme, namely nondestructive detection of a single photon using an optomechanical transducer. The basic idea is similar to spin readout; the photon exerts a force on a mechanical oscillator and the force is measured optically. I will argue that such a measurement is subject to a quantum limit due to backaction of the transducer on the dynamics of the photon and that this result also applies to other techniques of nondestructive photon detection, such as methods using Kerr interaction between the single photon and a meter beam. Finally, I will show numerically that measurement backaction can be evaded when the measurement rate is suitably modulated.
Semi-empirical Monte Carlo optical-gain modelling of Nuclear Imaging scintill... - Anax Fotopoulos
The document describes a semi-empirical Monte Carlo model to estimate the optical gain (DOG) of single-crystal scintillators excited by gamma rays. The model divides the crystal into layers, uses EGSnrc to simulate gamma-ray absorption, and combines this with an analytical model of optical photon propagation between layers. The model is validated against experimental data for LSO:Ce, GSO:Ce and YAP:Ce crystals at 140 keV and 364 keV. Results show the model can predict DOG values and determine an optimum crystal thickness for different gamma-ray energies.
Experimental Neutrino Physics Concepts in a Nutshell - Son Cao
This document provides an overview of the key steps involved in designing and conducting neutrino oscillation experiments. It discusses how experimental neutrino physicists formulate hypotheses about neutrino oscillations, design experiments to test these hypotheses, build and operate detectors to collect data, and make statements based on the observed data. Specific examples from T2K and NOvA are used to illustrate how these experiments addressed challenges like creating neutrino beams, choosing detector locations, and identifying electron neutrinos emerging from muon neutrino beams. The document aims to provide theoretical physics students with a practical guide for thinking about neutrino experiments.
- Optical spectra can be efficiently calculated using Green's function theory and the Bethe-Salpeter equation (BSE) or time-dependent density functional theory (TDDFT) formulated in the electron-hole space.
- The Lanczos-Haydock approach can be used to solve the BSE and TDDFT equations without fully diagonalizing the large matrices, greatly improving computational efficiency.
- While TDDFT provides a lower-cost approximation, the BSE more fully accounts for electron-hole interactions and is less prone to breakdowns like those from the Tamm-Dancoff approximation.
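The Lanczos-Haydock idea mentioned above can be sketched compactly: tridiagonalize the Hermitian matrix from a chosen start vector and evaluate the resolvent as a continued fraction, without ever diagonalizing. The matrix and start vector below are small illustrative stand-ins, not an actual BSE or TDDFT kernel:

```python
# Lanczos-Haydock recursion: build tridiagonal coefficients (a_n, b_n) for a
# Hermitian matrix H and start vector |v0>, then evaluate the resolvent
# element <v0|(omega + i*eta - H)^(-1)|v0> (v0 normalized) as a continued fraction.
import numpy as np

def haydock_coefficients(H, v0, n_iter):
    a, b = [], []
    v_prev = np.zeros_like(v0, dtype=float)
    v = v0 / np.linalg.norm(v0)
    b_prev = 0.0
    for _ in range(n_iter):
        w = H @ v - b_prev * v_prev          # apply H, subtract previous vector
        a_n = np.vdot(v, w).real             # diagonal coefficient
        w = w - a_n * v
        b_n = np.linalg.norm(w)              # off-diagonal coefficient
        a.append(a_n); b.append(b_n)
        if b_n < 1e-12:                      # invariant subspace exhausted
            break
        v_prev, v, b_prev = v, w / b_n, b_n
    return a, b

def greens_function(omega, a, b, eta=0.05):
    """Continued fraction for the resolvent at omega + i*eta."""
    g = 0.0
    for a_n, b_n in zip(reversed(a), reversed(b)):
        g = 1.0 / (omega + 1j * eta - a_n - b_n**2 * g)
    return g
```

For a small matrix the recursion terminates exactly, and the continued fraction reproduces the exact resolvent; the spectral function follows as $-\mathrm{Im}\,G/\pi$.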
Yet another statistical analysis of the data of the ‘loophole free’ experime... - Richard Gill
I presented novel statistical analyses of the data of the famous Bell-inequality experiments of 2015 and 2016: Delft, NIST, Vienna and Munich. Every statistical analysis relies on statistical assumptions. I make the traditional, but questionable, i.i.d. assumptions; they justify a novel (?) analysis which is both simple and (close to) optimal.
It enables us to fairly compare the results of the two main types of experiments: NIST and Vienna CH-Eberhard “one-channel” experiment with target settings and state chosen to optimise the handling of the detection loophole (detector efficiency > 66.7%); Delft and Munich CHSH “two channel” experiments based on entanglement swapping, with the target state and settings which achieve the Tsirelson bound (detector efficiency ≈ 100%).
One cannot say which type of experiment is better without agreeing on how to compromise between the desires to obtain high statistical significance and high physical significance. Moreover, robustness to deviations from traditional assumptions is also an issue.
I also discussed my current opinions on the question: what should we now believe about locality and realism and the foundations of quantum mechanics? My provisional conclusion is "exquisite/angelic spukhafte Fernwirkung" (spooky action at a distance) ... but tempered with a quantum Buddhist point of view: nothing is real. This was a talk at the 2019 Växjö conference QIRIF.
Pairing and Symmetries in Nuclear Matter - Alex Quadros
This document discusses nuclear pairing phenomena using a Dirac-Hartree-Fock-Bogolyubov approach. Key findings include:
1) Nuclear pairing occurs when nucleons within nuclei or neutron stars form strongly correlated pairs at short distances. This includes standard proton-proton and neutron-neutron pairing.
2) The Dirac-Hartree-Fock-Bogolyubov approach provides a relativistic description of nuclear matter using mean fields and particle-hole and particle-particle transformations.
3) Symmetries of the Dirac-Hartree-Fock-Bogolyubov Hamiltonian, including isospin symmetry, allow certain unitary transformations that can diagonalize the Hamiltonian.
I gave a 1-hour seminar at ANSTO (Australian Nuclear Science and Technology Organization) to introduce my approach to magnetism. I see myself as an experimental physicist who studies magnetism using neutron scattering techniques. Throughout my career, I have learned local structure analysis (PDF), magnetic structure analysis, and inelastic neutron scattering to investigate superconductors, multiferroics, antiferromagnets, helimagnets, and frustrated magnets. In this talk I tried to explain my approach to magnetism, as an experimental physicist, to both professional scientists and novices.
The document discusses room temperature superconductivity and summarizes past research on high temperature superconductivity. It also examines the electronic structure and superconducting properties of NaxCoO2. Specifically:
1) Prior research from the 1970s explored whether high temperature superconductivity was possible.
2) Calculations and experiments on NaxCoO2 suggest it has unusual magnetic and electronic properties, including spin fluctuations that may be important to superconductivity.
3) Studies of the superconducting state in NaxCoO2 indicate it is unconventional and not fully gapped, with triplet f-wave pairing being a leading hypothesis to explain experimental results.
Puzzling pairing in the non-centrosymmetric superconductor LaNiC2 - Jorge Quintanilla
The document summarizes research into the puzzling pairing in the non-centrosymmetric superconductor LaNiC2. Muon spin relaxation experiments showed the superconducting state breaks time-reversal symmetry through spontaneous quasi-static fields. Theoretically, this implies non-unitary triplet pairing with weak spin-orbit coupling. The transition may split into two stages due to the influence of spin-orbit coupling on relativistic and non-relativistic instabilities. While experimental evidence points to time-reversal symmetry breaking, the specific pairing symmetry is still unknown, as is the reason for non-unitary pairing. The research highlights that noncentrosymmetric superconductors cannot be fully understood through Rashba coupling alone and uncon
The document discusses general relativistic N-body simulations of cosmic large-scale structure using a code called gevolution. Gevolution computes the six metric degrees of freedom by solving Einstein's equations given a stress-energy tensor from N-body particles. It evolves particles along relativistic geodesics and can calculate power spectra of the metric perturbations. Future versions will implement light cone output for ray tracing simulations and modeling observed large-scale structure.
Authoring a personal GPT for your research and practice: How we created the Q... - Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
The debris of the ‘last major merger’ is dynamically young - Sérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different from the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste... - Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environments, dominated by feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low- and high-mass stars. The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically, the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Ms. Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emission from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Phenomics assisted breeding in crop improvement - IshaGoswami9
As the global population increases, reaching about 9 billion by 2050, and as the climate changes, it is difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test, its significance, and the assays used to conduct it. The test is used to detect micronucleus formation inside the cells of nearly every multicellular organism; micronuclei form during chromosomal separation at metaphase.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... - University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Numerical Renormalization Group computation of magnetic relaxation rates
1. Numerical Renormalization-Group computation of magnetic relaxation rates
Krissia de Zawadzki, Luiz Nunes de Oliveira, Jose Wilson M. Pinto
Instituto de Física de São Carlos - Universidade de São Paulo
Zawadzki, K. de; Oliveira, L.N.; Pinto, J.W.M. NRG computation of nuclear magnetic relaxation rates 1 / 11
2.–7. Introduction NRG calculations Numerical results Conclusions Acknowledgment
Radius of the Kondo screening cloud

General consensus: $R_K = \hbar v_F / k_B T_K$, so that $R_K \propto T_K^{-1}$.
NMR experiment: Boyce and Slichter.

Experimental arrangement:
- NMR probe at a distance $R$ from the impurity
- NRG computation of the spin-lattice relaxation rate $1/(T_1 T)$ as a function of $T$ and $R$

Can we measure $R_K$ via NMR? Our findings: Yes, we can!
- The $T$ dependence changes as the probe crosses $R_K$
- The phase of the low-$T$ Friedel oscillations also changes

LASZLO, B. PRB 75 (2007); BOYCE, J.B.; SLICHTER, C.P. PRL 32, 61 (1974).
Zawadzki, K. de; Oliveira, L.N.; Pinto, J.W.M. NRG computation of nuclear magnetic relaxation rates 2 / 11
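The consensus estimate $R_K = \hbar v_F / k_B T_K$ quoted above is easy to evaluate numerically. A minimal sketch; the Fermi velocity and Kondo temperature below are typical illustrative values, not taken from the slides:

```python
# Back-of-envelope evaluation of R_K = hbar * v_F / (k_B * T_K).
# v_F and T_K are illustrative: a copper-like Fermi velocity and a 1 K
# Kondo temperature; real systems span roughly T_K ~ 0.01-100 K.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_B = 1.380649e-23       # Boltzmann constant, J/K
v_F = 1.57e6             # Fermi velocity, m/s (copper-like)
T_K = 1.0                # Kondo temperature, K

R_K = hbar * v_F / (k_B * T_K)
print(f"R_K ~ {R_K * 1e6:.1f} micrometers")  # about 12 micrometers
```

The micrometer scale of this estimate is why the screening cloud is hard to observe directly, and why a position-resolved probe such as NMR is attractive.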
8.–11. The quantum system: NRG + probe

Single-impurity Anderson model:
$$H = \overbrace{\sum_{\mathbf{k}} \varepsilon_{\mathbf{k}}\, c^{\dagger}_{\mathbf{k}} c_{\mathbf{k}}}^{H_{\rm cond}} + \overbrace{\varepsilon_d\, c^{\dagger}_d c_d + U n_{d\uparrow} n_{d\downarrow}}^{H_d} + \overbrace{\sqrt{\tfrac{\Gamma}{\pi}}\,\bigl(f^{\dagger}_0 c_d + {\rm H.c.}\bigr)}^{H_{\rm int}}$$
with linear dispersion $\varepsilon_{\mathbf{k}} = v_F\,(k - k_F)$ on a conduction band running from $-D$ to $+D$, and $f_0 = \rho^{-1/2} \sum_{\mathbf{k}} c_{\mathbf{k}}$.

Probe Hamiltonian, for an NMR probe at position $\vec{R}$:
$$H_{\rm probe} = -A \bigl[\Psi^{\dagger}_{\uparrow}(\vec{R})\, \Psi_{\downarrow}(\vec{R})\, I^{-} + {\rm H.c.}\bigr], \qquad \Psi_{\mu} = \sum_{\mathbf{k}} e^{i\mathbf{k}\cdot\mathbf{R}}\, c_{\mathbf{k}\mu}$$

Spin-lattice relaxation rate (golden rule):
$$\frac{1}{T_1} = \frac{4\pi}{\hbar} \sum_{I,F} e^{-\beta E_I}\, \bigl|\langle I|H_{\rm probe}|F\rangle\bigr|^2\, \delta(E_I - E_F)$$

Zawadzki, K. de; Oliveira, L.N.; Pinto, J.W.M. NRG computation of nuclear magnetic relaxation rates 3 / 11
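The golden-rule expression for $1/T_1$ on slide 11 can be evaluated directly once a finite set of many-body energies and probe matrix elements is available, as an NRG iteration supplies. A minimal sketch with made-up test data; the Gaussian broadening of the delta function and the normalization by the partition function are my choices here, not taken from the slides:

```python
# Illustrative evaluation of 1/T1 = (4*pi/hbar) * sum_{I,F} exp(-beta*E_I)
# * |<I|H_probe|F>|^2 * delta(E_I - E_F) on a finite spectrum.
import numpy as np

def relaxation_rate(E, M, beta, width, hbar=1.0):
    """1/T1 with Boltzmann weights normalized by the partition function.
    E: many-body energies, M: probe matrix elements M[I, F],
    width: Gaussian broadening replacing the delta function."""
    w = np.exp(-beta * (E - E.min()))        # shifted for numerical stability
    Z = w.sum()
    dE = E[:, None] - E[None, :]             # E_I - E_F
    delta = np.exp(-(dE / width) ** 2) / (np.sqrt(np.pi) * width)
    return (4 * np.pi / hbar) * np.sum(w[:, None] / Z * np.abs(M) ** 2 * delta)

# Made-up spectrum and matrix elements, standing in for NRG output:
rng = np.random.default_rng(0)
E = np.sort(rng.uniform(0.0, 1.0, 20))
M = rng.normal(size=(20, 20)) * 1e-3
print(relaxation_rate(E, M, beta=10.0, width=0.05))
```

For two degenerate levels coupled by a unit matrix element, the formula reduces to $4\sqrt{\pi}/w$ for broadening $w$, which is a convenient sanity check.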
12. Two-center basis

Spherically symmetric operators:
$$c_{\varepsilon} = \sum_{\mathbf{k}} c_{\mathbf{k}}\, \delta(\varepsilon - \varepsilon_{\mathbf{k}}) \quad \text{(around the impurity)}, \qquad d_{\varepsilon} = \sum_{\mathbf{k}} c_{\mathbf{k}}\, e^{i\mathbf{k}\cdot\mathbf{R}}\, \delta(\varepsilon - \varepsilon_{\mathbf{k}}) \quad \text{(around the probe)}$$

Zawadzki, K. de; Oliveira, L.N.; Pinto, J.W.M. NRG computation of nuclear magnetic relaxation rates 4 / 11
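A geometric aside on the two operators above: $d_\varepsilon$ differs from $c_\varepsilon$ only by the phase $e^{i\mathbf{k}\cdot\mathbf{R}}$, so the overlap of the two shell states at a given energy is the angular average of $e^{i\mathbf{k}\cdot\mathbf{R}}$ over the shell, which in 3D is $j_0(kR) = \sin(kR)/kR$. A small Monte Carlo check of that average (illustrative, not part of the slides):

```python
# Angular average of exp(i k . R) over a spherical shell of radius k,
# estimated by Monte Carlo and compared with j0(kR) = sin(kR)/(kR).
import numpy as np

def shell_overlap(kR, n_angles=20000):
    """Monte Carlo angular average of exp(i k . R) over the shell."""
    rng = np.random.default_rng(1)
    # For directions uniform on the sphere, cos(theta) is uniform in [-1, 1].
    cos_theta = rng.uniform(-1.0, 1.0, n_angles)
    return np.mean(np.exp(1j * kR * cos_theta)).real

kR = 3.0
print(shell_overlap(kR), np.sin(kR) / kR)  # both close to j0(3) ~ 0.047
```

The decay of this overlap with $kR$ is what makes the two centers effectively independent at large separation.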