The document describes the Rietveld refinement method used in the GSAS-II software. It provides an overview of the history and development of Rietveld refinement by Hugo Rietveld. It then discusses the underlying linear and non-linear least squares theory used in Rietveld refinement, including the use of singular value decomposition to solve the normal equations. Finally, it covers specific aspects of the GSAS-II algorithm such as profile functions, constraints, and error estimates.
OPTIMIZED RATE ALLOCATION OF HYPERSPECTRAL IMAGES IN COMPRESSED DOMAIN USING ... (Pioneer Natural Resources)
This document discusses optimizing the rate allocation of hyperspectral images compressed using JPEG2000. It presents a mixed model for bit allocation that combines high and low bit rate models. This mixed model and an optimal rate allocation approach based on minimizing mean squared error under a rate constraint provide lower reconstruction errors than traditional approaches. Computational tests on hyperspectral data show the discrete wavelet transform allows for faster processing and less memory usage compared to the Karhunen-Loeve transform.
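Under the classical high-rate distortion model (not the paper's mixed model, which is not reproduced here), the MSE-optimal allocation under an average-rate constraint has a closed form; a minimal sketch, with illustrative band variances:

```python
import math

def allocate_rates(variances, r_avg):
    """MSE-optimal bit allocation across bands under an average-rate
    constraint, assuming the high-rate model D_i ~ var_i * 2**(-2*R_i):
    R_i = r_avg + 0.5 * log2(var_i / geometric_mean(variances))."""
    n = len(variances)
    geo_mean = math.exp(sum(math.log(v) for v in variances) / n)
    return [r_avg + 0.5 * math.log2(v / geo_mean) for v in variances]

# Bands with larger variance receive more bits; the average rate is preserved.
rates = allocate_rates([4.0, 1.0], 2.0)
```

This is the standard log-variance rule; a mixed high/low-rate model as in the paper would modify the distortion term at low rates.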
This document summarizes research on quantum chaos, including the principle of uniform semiclassical condensation of Wigner functions, spectral statistics in mixed systems, and dynamical localization of chaotic eigenstates. It discusses how in the semiclassical limit, Wigner functions condense uniformly on classical invariant components. For mixed systems, the spectrum can be seen as a superposition of regular and chaotic level sequences. Localization effects can be observed if the Heisenberg time is shorter than the classical diffusion time. The document presents an analytical formula called BRB that describes the transition between Poisson and random matrix statistics. An example is given of applying this to analyze the level spacing distribution for a billiard system.
This document presents results from a lattice QCD calculation of the proton isovector scalar charge (gS) at two light quark masses. The calculation uses domain-wall fermions and the Iwasaki gauge action on a 32³×64 lattice with a spacing of 0.144 fm. Ratios of three-point to two-point correlation functions are formed and fit to a plateau to extract gS. Values of gS are obtained for quark masses of 0.0042 and 0.001, and all-mode averaging is used for the lighter mass. Chiral perturbation theory will be used to extrapolate gS to the physical quark mass. Preliminary results for gS at the unphysical quark masses are reported in lattice units.
2007 EuRad Conference: Speech on Rough Layers (ppt) (Nicolas Pinel)
The document presents a method for calculating the bistatic radar cross section (RCS) of two random rough layers in the high-frequency limit using the first-order Kirchhoff approximation. The method involves applying the Kirchhoff approximation separately to the upper and lower interfaces and iterating to account for multiple scattering between the interfaces. Analytical expressions for the second-order RCS are derived in 2D and 3D geometries. Numerical results show good agreement with a full-wave method and validate the model in the high-frequency limit.
2007 EuRad Conference: Speech on Rough Layers (odp) (Nicolas Pinel)
The document presents a method for calculating the bistatic radar cross section (RCS) of two random rough layers in the high-frequency limit using the first-order Kirchhoff approximation. The method involves applying the Kirchhoff approximation separately to the upper and lower interfaces and iterating to account for multiple scattering between the interfaces. Analytical expressions for the second-order RCS are derived in 2D and 3D geometries. Numerical results show good agreement with a full-wave method and validate the model in various scenarios. The method provides a fast way to model scattering from strongly rough double interfaces.
In this talk I will present real-time spectroscopy and the different codes used to perform these kinds of calculations.
The presentation can be downloaded here:
http://www.attaccalite.com/wp-content/uploads/2022/03/RealTime_Lausanne_2022.odp
Streaming Multigrid for Gradient Domain Operations on Large Images (Chiamin Hsu)
The document describes a streaming multigrid solver for solving Poisson's equation on large images. It develops a multigrid method using a B-spline finite element basis that can efficiently process images in a streaming fashion using only a small window of image rows in memory at a time. The method achieves accurate solutions to Poisson's equation on gigapixel images in only 2 V-cycles by leveraging the temporal locality of the multigrid algorithm.
Design and Implementation of Variable Radius Sphere Decoding Algorithm (csandit)
The Sphere Decoding (SD) algorithm is a decoding algorithm based on the Zero Forcing (ZF) algorithm in the real number field. The classical SD algorithm is known for its outstanding Bit Error Rate (BER) performance and decoding strategy. The algorithm obtains its maximum likelihood solution by recursively shrinking the searching radius. However, the method of shrinking the searching radius is too complicated for ground communication systems. This paper proposes a Variable Radius Sphere Decoding (VR-SD) algorithm based on the ZF algorithm in order to simplify the complex searching steps. We prove the advantages of the VR-SD algorithm through the derivation of mathematical formulas and through simulation of the BER performance of the SD and VR-SD algorithms.
Here are the key changes to the noise analysis of the common source amplifier example if:
1) Transistor is in triode region:
- gm would be a function of VGS and VDS instead of a constant
- inMOS would depend on gm as a function of voltages
2) Include flicker noise:
- Add a flicker noise current source inMOS,f
- PSD of inMOS,f is Kf/f instead of constant
- Integrate PSD from fmin to fmax
3) Replace RL with PMOS load:
- Replace RL with the output resistance ro of the PMOS load transistor
- Add thermal noise current source of the PMOS load transistor
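The flicker-noise change in step 2 can be made concrete: integrating a 1/f PSD over a band gives a logarithmic dependence on the band edges. A minimal sketch, with the coefficient `Kf` and band edges as illustrative placeholders rather than values from the example:

```python
import math

def integrated_flicker_noise(Kf, fmin, fmax):
    """Integrate a flicker-noise PSD S(f) = Kf / f from fmin to fmax.
    The integral of Kf/f is Kf * ln(fmax/fmin), so the total mean-square
    noise grows per decade of bandwidth rather than per hertz."""
    return Kf * math.log(fmax / fmin)

# Each decade of bandwidth contributes the same mean-square flicker noise.
per_decade_low = integrated_flicker_noise(1e-24, 1.0, 10.0)
per_decade_high = integrated_flicker_noise(1e-24, 1e3, 1e4)
```

This is why flicker noise dominates at low frequencies while the (white) thermal terms dominate once the band extends to high frequencies.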
Newton's method and the Gauss-Newton method can be used to minimize a nonlinear least squares function to fit a vector of model parameters to a data vector. The Gauss-Newton method approximates the Hessian matrix as the Jacobian transpose times the Jacobian, ignoring additional terms, making it faster to compute but less accurate than Newton's method. The Levenberg-Marquardt method interpolates between the Gauss-Newton and steepest descent methods to provide a balance of convergence speed and accuracy. Iterative methods like conjugate gradients are useful for large nonlinear problems where storing and inverting the full matrix would be prohibitive. L1 regression provides a more robust alternative to L2 regression for dealing with outliers through minimization of the absolute error rather than the squared error.
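The Gauss-Newton approximation described above (Hessian ≈ JᵀJ) can be sketched in a few lines; the exponential model, data, and starting point below are illustrative, not from the source:

```python
import numpy as np

def gauss_newton(model, jac, x0, t, y, iters=20):
    # Minimize sum((y - model(t, x))**2); the Hessian is approximated
    # as J^T J (Gauss-Newton), dropping second-derivative terms.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y - model(t, x)                     # residual vector
        J = jac(t, x)                           # Jacobian of model wrt x
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)  # solve J dx ~ r
        x = x + dx
    return x

# Illustrative zero-residual fit: y = a * exp(b * t).
model = lambda t, x: x[0] * np.exp(x[1] * t)
jac = lambda t, x: np.column_stack([np.exp(x[1] * t),
                                    x[0] * t * np.exp(x[1] * t)])
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t)                      # noiseless synthetic data
x_fit = gauss_newton(model, jac, [1.0, -1.0], t, y)
```

Levenberg-Marquardt would replace the plain step with the solution of (JᵀJ + λI)dx = Jᵀr, increasing λ when a step fails to reduce the error.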
Alexei Starobinsky - Inflation: the present status (SEENET-MTP)
This document summarizes a presentation on inflation and the present status of inflationary cosmology. It discusses the key epochs in the early universe, including inflation, and how inflation solved issues with prior models. Observational evidence for inflation is presented, including measurements of the primordial power spectrum and constraints on the tensor-to-scalar ratio. Simple single-field inflation models are shown to match observations. The document also discusses the generation of primordial perturbations from quantum fluctuations during inflation and how this provides the seeds for structure formation.
Recent developments in the field of reduced order modeling - and in particular, active subspace construction - have made it possible to efficiently approximate complex models by constructing low-order response surfaces based upon a small subspace of the original high dimensional parameter space. These methods rely upon the fact that the response tends to vary more prominently in a few dominant directions defined by linear combinations of the original inputs, allowing for a rotation of the coordinate axes and a consequent transformation of the parameters. In this talk, we discuss a gradient-free active subspace algorithm that is feasible for high dimensional parameter spaces where finite-difference techniques are impractical. We illustrate an initialized gradient-free active subspace algorithm for a neutronics example implemented with SCALE6.1.
The document summarizes early searches for supersymmetry using data from the LHC with an integrated luminosity of 70 nb⁻¹. No significant deviations from the standard model predictions were observed. This places initial constraints on simplified models of new physics but more data will be needed to thoroughly probe the parameter space of popular theories like mSUGRA. The use of simplified models was discussed as a way to explore signatures of new physics without relying on specific ultraviolet completions.
This presentation discusses using transformation optics and finite-difference time-domain (FDTD) simulations to design metamaterials that manipulate light in desired ways. It begins with an overview of transformation optics and how material parameters can be derived to effect a spatial transformation on light rays. An example of a "beam turner" is presented, along with the calculations to determine the required inhomogeneous, bi-anisotropic material properties. The presentation then discusses using FDTD simulations to model light propagation through these designed materials by discretizing Maxwell's equations in space and time. Examples shown include simulations of the beam turner and cloaking devices.
This document provides an overview of higher dimensional theories and their connection to lower dimensional theories through renormalization group flows and fixed points. It discusses the Banks-Zaks fixed point of QCD specifically. Perturbative fixed points in higher dimensions are connected to non-perturbative fixed points in lower dimensions, allowing access to these non-perturbative regimes. As an example, it examines the O(N)×O(m) Landau-Ginzburg-Wilson model in 6 − 2ε dimensions and calculates critical exponents to show universality between this theory and the 4D model. It also shows that critical exponents in QCD are renormalization scheme independent by calculating them to high loop order in different schemes.
Marija Dimitrijević Ćirić, "Matter Fields in SO(2,3)⋆ Model of Noncommutative ..." (SEENET-MTP)
This document summarizes a talk given at a workshop on field theory and the early universe. The talk discussed a model of noncommutative gravity based on an SO(2,3) gauge theory. Key points:
1) The model treats gravity as an SO(2,3) gauge theory that is spontaneously broken to SO(1,3), relating it to general relativity. An action is constructed and expanded to obtain corrections from noncommutativity.
2) Adding matter fields like spinors and U(1) gauge fields yields modified actions and propagators with corrections depending on the noncommutativity tensor.
3) As an example, the noncommutative Landau problem is solved, giving
This document summarizes the MATLAB Reservoir Simulation Toolbox (MRST), which provides an environment for reservoir modelling and simulation using MATLAB. MRST features fully unstructured grids, rapid prototyping capabilities through automatic differentiation and object-oriented design, and industry-standard simulation methods. It has a large international user base in both academia and industry and consists of over 50 modules and thousands of lines of code.
This document discusses transmission line models used in power system analysis. It begins with an overview of the distributed parameter model that represents an infinitesimal length of transmission line using series impedance and shunt admittance. It then derives the telegrapher's equations and uses them to develop a single second-order differential equation for the voltage along the line. The document presents solutions for this equation that allow determining the voltage and current at any point on the line given conditions at one end. It introduces transmission line parameters including characteristic impedance and propagation constant and shows how they relate the sending and receiving end quantities. Equivalent lumped-parameter π models are also derived in terms of the transmission line parameters. Finally, it discusses short-line models.
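The sending-end/receiving-end relation described above is conventionally written with ABCD constants built from the propagation constant and characteristic impedance; a minimal sketch, with an illustrative line (values not from the document):

```python
import cmath

def line_abcd(z, y, length):
    """ABCD constants of a uniform transmission line from its per-unit-length
    series impedance z (ohm/km) and shunt admittance y (S/km):
        Vs = A*Vr + B*Ir,   Is = C*Vr + D*Ir
    with propagation constant gamma = sqrt(z*y) and characteristic
    impedance Zc = sqrt(z/y); D = A for a uniform line."""
    gamma = cmath.sqrt(z * y)
    zc = cmath.sqrt(z / y)
    A = cmath.cosh(gamma * length)
    B = zc * cmath.sinh(gamma * length)
    C = cmath.sinh(gamma * length) / zc
    return A, B, C, A

# Illustrative 200 km line.
A, B, C, D = line_abcd(0.03 + 0.35j, 4.4e-6j, 200.0)
```

The identity AD − BC = cosh² − sinh² = 1 is a useful sanity check on any implementation; the equivalent π model is recovered from these same constants.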
The document describes experiments to simulate and analyze second order systems in time domain. It discusses designing a second order RLC circuit with different damping ratios ξ and applying a unit step input. The time domain specifications like percentage overshoot, peak time, rise time and settling time are calculated theoretically and also measured experimentally for different damping cases. Another experiment aims to design a passive RC lead compensator network for a specified phase lead and verify its performance using Bode plots. A third experiment analyzes steady state error of type-0, type-1 and type-2 digital control systems using MATLAB. A fourth experiment discusses simulating position control of an armature controlled DC motor in state space. The last experiment discusses designing a digital controller with
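The time-domain specifications named for the underdamped second-order experiment follow standard closed forms in ζ and ωn; a minimal sketch (the example ζ and ωn are illustrative, not from the experiments):

```python
import math

def step_specs(zeta, wn):
    """Step-response specifications of an underdamped (0 < zeta < 1)
    second-order system with undamped natural frequency wn (rad/s)."""
    wd = wn * math.sqrt(1 - zeta**2)                    # damped frequency
    overshoot = 100 * math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2))
    peak_time = math.pi / wd
    rise_time = (math.pi - math.atan2(wd, zeta * wn)) / wd  # 0-100% rise
    settling_time = 4.0 / (zeta * wn)                   # 2% criterion
    return overshoot, peak_time, rise_time, settling_time

# zeta = 0.5, wn = 10 rad/s gives roughly 16% overshoot.
os_pct, tp, tr, ts = step_specs(0.5, 10.0)
```

These are the theoretical values the experiment compares against the measured RLC step response.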
The document discusses using relativistic many-body theories like DF, CASCI, CASPT2 and RASCI to calculate parity- and time-reversal-violating (P,T-odd) effects in polar molecules like YbF and BaF. It summarizes previous work calculating spectroscopic constants of YbF using various methods and outlines future work to calculate the electron electric dipole moment (EDM) and hyperfine constants using the above methods in other molecules.
This document provides an outline and introduction to the course "ELEG 867 - Compressive Sensing and Sparse Signal Representations" taught by Gonzalo R. Arce at the University of Delaware in Fall 2011. The course covers topics including vector spaces, the Nyquist-Shannon sampling theorem, sparsity, sparse signal representation, and compressive sensing. It discusses how compressive sensing allows reconstructing signals from far fewer samples than required by the Nyquist-Shannon theorem when the signals are sparse or compressible in some domain.
Practical Spherical Harmonics Based PRT Methods (Naughty Dog)
The document summarizes methods for compressing precomputed radiance transfer (PRT) coefficients using spherical harmonics. It presents 4 methods with progressively higher compression ratios: Method 1 uses 9 bytes by removing a factor and scaling, Method 2 uses 6 bytes with a bit field allocation, Method 3 uses 6 bytes with a Lloyd-Max non-uniform quantizer, and Method 4 achieves 4 bytes with a different bit allocation. The methods are evaluated based on storage size, reconstruction quality, and rendering performance.
The document describes the calibration of the calorimeter for the DVCS experiment in Hall A using π0 events. It discusses two methods for calibration: 1) elastic calibration using ep→e'p' events and 2) π0 calibration using ep→e'p'π0→e'p'γγ events. The π0 calibration involves minimizing the difference between theoretical and measured photon energies using calibration coefficients. Iterating the process allows the coefficients to converge. Applying the calibrated coefficients to data groups improves the agreement of the invariant mass and missing mass distributions with expected values. The calibration is effective and will be continued with more data.
The document summarizes the heavy-ion physics program using the Large Hadron Collider (LHC) detectors. It discusses probing novel regimes of high density saturated gluon distributions and qualitatively new physics. Key observables include jet quenching, quarkonia suppression, and heavy flavor modification to study the quark-gluon plasma produced in Pb-Pb collisions. The ALICE, ATLAS and CMS experiments are well-suited to measure bulk properties and select hard probes over a wide momentum range.
This document summarizes a presentation about reconstructing inflationary models in modified f(R) gravity. It discusses the current status of inflation based on Planck data, reviews how inflation works in f(R) gravity, and describes two approaches - the direct approach of comparing models to data and the inverse approach of smoothly reconstructing models from observational quantities like the scalar spectral index. A key model discussed is the simple R + R² model that can match current measurements of the spectral index and tensor-to-scalar ratio.
Alexei Starobinsky, "New results on inflation and pre-inflation in modified gr..." (SEENET-MTP)
1. The document summarizes new results on inflation and pre-inflation in modified gravity theories.
2. It discusses the simplest inflationary models in f(R) gravity, where f(R) is a function of the Ricci scalar R. Models with f(R) proportional to R² can produce inflation.
3. Perturbation spectra during inflation are calculated for slow-roll f(R) models where f(R) is proportional to R² times a slowly varying function. Scalar and tensor spectra depend on the Ricci scalar and its derivatives evaluated when modes cross the Hubble radius.
The document summarizes recent results from WZ, Higgs, and top analyses for upcoming summer conferences. For WZ analyses, cross sections are being updated in the electron and muon channels and combined with other experiments. For top analyses, improvements are being made to measurements of the cross section using topological cuts, dilepton events, and b-tagging. The top mass is also being measured. For Higgs analyses, searches are ongoing in the W/ZH to electrons/muons channels with additional b-tagging, and in decay modes like H to gamma gamma and WW.
Sistem inferensi fuzzy menggunakan kaidah-kaidah fuzzy untuk menarik kesimpulan dari input yang berupa nilai-nilai crisp. Prosesnya meliputi fuzzifikasi, operasi logika fuzzy, implikasi, agregasi, dan defuzzifikasi untuk menghasilkan output berupa nilai-nilai crisp juga. Terdapat dua metode utama, yaitu metode Mamdani dan metode Sugeno.
Backpropagation adalah metode pelatihan terawasi pada jaringan syaraf tiruan yang meminimalkan error output dengan memperbaharui bobot secara berulang hingga error terkecil. Metode ini melibatkan tahap feedforward, backpropagation of error, dan pembaruan bobot berdasarkan error tersebut.
Waves can interact with matter in several ways:
Reflection occurs when a wave bounces off a surface it cannot pass through. Refraction is when a wave bends as it passes from one medium into another at an angle, occurring because the wave slows down on one side compared to the other. Diffraction is the bending of a wave as it travels around or through an opening, with larger wavelengths diffracting more through smaller openings. Polarization occurs when unpolarized light is turned into polarized light that travels in one direction only, such as when light reflects off a flat surface. Scattering redirects light as it passes through a medium, causing the sky to appear blue and sunsets orange/red due to scattering
Dokumen tersebut memberikan ringkasan tentang konsep dasar listrik termasuk besaran-besaran listrik seperti tegangan, arus, daya, dan energi. Juga menjelaskan elemen pasif dan aktif serta tujuan pembelajaran mata kuliah elektro tentang analisis rangkaian listrik.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
2. HISTORY – H.M. RIETVELD
Hugo Rietveld; neutron powder diffractometer,
Petten, Netherlands
Papers: H.M. Rietveld, Acta Cryst. 22, 151-152 (1967)
H.M. Rietveld, J. Appl. Cryst. 2, 65-71 (1969)
Multi-parameter, nonlinear LS curve fitting
Exact overlaps – symmetry
Incomplete overlaps
Residuals:
$R_{wp} = \sqrt{\sum w(I_o - I_c)^2 / \sum w I_o^2}$
Rietveld: minimize
$M_R = \sum w(I_o - I_c)^2$
$\chi^2 = M_R/(n-p)$ – "chi-squared" or "goodness-of-fit"
3. LINEAR LEAST SQUARES THEORY
Given a set of observations $I_{obs}$ and a function $I_{calc} = f(p_1, p_2, p_3, \ldots, p_n)$,
then the best estimate of the values $p_i$ is found by minimizing
$M = \sum w(I_o - I_c)^2$
This is done by setting the derivatives to zero:
$\sum w(I_o - I_c)\,\frac{\partial I_c}{\partial p_j} = 0$
Results in n "normal" equations (one for each variable) – solve for $p_i$
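For a model that is linear in its parameters, the normal equations above can be sketched in a few lines of NumPy. This is an illustration with made-up data, not GSAS-II code:

```python
import numpy as np

# Sketch of weighted linear least squares via the normal equations,
# for a model I_calc = p1 + p2*x (linear in the parameters p1, p2).
# The data points are made up.
x = np.array([0.0, 1.0, 2.0, 3.0])
I_obs = np.array([1.1, 2.9, 5.2, 6.8])
w = np.ones_like(I_obs)                 # weights, typically 1/sigma^2

# Design matrix: column j holds dI_calc/dp_j
D = np.column_stack([np.ones_like(x), x])

# Normal equations: (D^T W D) p = D^T W I_obs
A = D.T @ (w[:, None] * D)
v = D.T @ (w * I_obs)
p = np.linalg.solve(A, v)               # → [1.09, 1.94]
```

Because the model is linear, one solve of the normal equations gives the answer directly; the non-linear case below needs shifts and iteration.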
4. NON-LINEAR LEAST SQUARES THEORY
Problem – $I(p_i)$ is nonlinear & transcendental (sin, cos, etc.)
so can't solve directly
Expand $I(p_i)$ as a Taylor series & toss high-order terms:
$I_c(p_i) = I_c(a_i) + \sum_i \frac{\partial I_c}{\partial p_i}\,\Delta p_i$
$a_i$ – initial values of $p_i$; $\Delta p_i = p_i - a_i$ (shift)
Substituting into the minimum condition:
$\sum w\left[I_o - I_c(a_i) - \sum_i \frac{\partial I_c}{\partial p_i}\,\Delta p_i\right]\frac{\partial I_c}{\partial p_j} = 0$
Outer sum over observations
Solve for $\Delta p_i$ – shifts of parameters, NOT values
Normal equations – one for each $\Delta p_i$
5. LEAST SQUARES THEORY - CONTINUED
Rearrange:
$\sum_i \left[\sum w\,\frac{\partial I_c}{\partial p_1}\frac{\partial I_c}{\partial p_i}\right]\Delta p_i = \sum w(I_o - I_c)\frac{\partial I_c}{\partial p_1}$
$\vdots$
$\sum_i \left[\sum w\,\frac{\partial I_c}{\partial p_n}\frac{\partial I_c}{\partial p_i}\right]\Delta p_i = \sum w(I_o - I_c)\frac{\partial I_c}{\partial p_n}$
Matrix form: $Ax = v$, where
$a_{ij} = \sum w\,\frac{\partial I_c}{\partial p_i}\frac{\partial I_c}{\partial p_j},\quad x_j = \Delta p_j,\quad v_i = \sum w(I_o - I_c)\frac{\partial I_c}{\partial p_i}$
Solve: $x = A^{-1}v = Bv$; $B = A^{-1}$. This gives the set of $\Delta p_i$ to apply
to the "old" set of $a_i$; repeat until the $\Delta p_i$ are small.
6. GSAS-II ALGORITHM
GSAS – process point-by-point to make $I_c$ (value) & $\partial I_c/\partial p_i$ (vector);
accumulate the Hessian matrix A: $a_{ij} = \sum w\,[\partial I_c/\partial p_i]\cdot[\partial I_c/\partial p_j]$
GSAS-II – process reflection-by-reflection to make $I_c$ (vector) & $\partial I_c/\partial p_i$
(Jacobian matrix J: one row per profile point in 2Θ or TOF, one column per parameter $p_i$)
Jacobian matrix J gives the Hessian matrix A: $A = J^T w J$
NB: GSAS-II needs large memory!
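The equivalence of the two bookkeeping schemes can be checked numerically. A small sketch, with random numbers standing in for real derivatives, showing that the point-by-point accumulation equals $J^T w J$:

```python
import numpy as np

# Check that the GSAS bookkeeping (accumulate A point-by-point from the
# derivative vector at each profile point) matches the GSAS-II bookkeeping
# (build the full Jacobian J, then A = J^T w J). Random numbers stand in
# for the real derivatives dIc/dp.
rng = np.random.default_rng(0)
n_obs, n_par = 6, 3
J = rng.normal(size=(n_obs, n_par))     # Jacobian: row per profile point
w = rng.uniform(0.5, 2.0, size=n_obs)   # weights

A_accum = np.zeros((n_par, n_par))      # GSAS style: point-by-point
for i in range(n_obs):
    A_accum += w[i] * np.outer(J[i], J[i])

A_jac = J.T @ (w[:, None] * J)          # GSAS-II style: whole Jacobian

assert np.allclose(A_accum, A_jac)      # same Hessian approximation
```

The memory trade-off is visible here: the accumulation never holds more than one derivative vector, while the Jacobian route stores the full n_obs × n_par matrix.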
7. REFINEMENT VIA MODIFIED
LEVENBERG/MARQUARDT-SVD ALGORITHM
Steps:
1. Compute A & v (the SLOW step)
2. Normalize A
3. Compute $\chi^2(p)$
4. Select $\lambda$ (= 0.001, "damping factor")
5. Modify the diagonal: $A''_{ii} = A'_{ii}(1+\lambda)$
6. Make SVD inversion of A″
7. Solve for $\delta p$ (unnormalized!) & compute $\chi^2(p+\delta p)$
8. If $\chi^2(p+\delta p) > \chi^2(p)$ then $\lambda \times 10$; go to 5
9. Else apply $\delta p$ to p & go to 1 (new cycle)
10. Quit when $[\chi^2(p) - \chi^2(p+\delta p)]/\chi^2(p) < 0.0001$
Steps 5-9 are the FAST steps.
NB: all in ~40 lines of python; all double precision
NB2: this thing is exceedingly robust – no user damping factors needed
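The stepwise recipe above can be sketched as a short Python loop. This is an illustrative reimplementation under stated assumptions (fixed SVD tolerance of $10^{-6}$, Marquardt normalization of the diagonal, λ×10 on failure), not the actual GSAS-II routine; all names are made up:

```python
import numpy as np

def lm_svd_refine(fcalc, jac, p, I_obs, w, lam=0.001, tol=1e-4, max_cycles=20):
    # Illustrative Levenberg-Marquardt/SVD loop; not GSAS-II code.
    # Assumes no parameter has an identically zero derivative.
    def chisq(q):
        r = I_obs - fcalc(q)
        return float(np.sum(w * r * r))

    chi2 = chisq(p)
    for _ in range(max_cycles):
        J = jac(p)                               # 1. compute A & v (SLOW)
        A = J.T @ (w[:, None] * J)
        v = J.T @ (w * (I_obs - fcalc(p)))
        s = 1.0 / np.sqrt(np.diag(A))            # 2. normalize A & v
        An = s[:, None] * A * s[None, :]
        vn = s * v
        improved = False
        for _ in range(15):                      # 5.-8. FAST damping loop
            Ad = An + lam * np.eye(len(p))       # 5. modify the diagonal
            U, sv, Vt = np.linalg.svd(Ad)        # 6. SVD inversion
            inv = np.where(sv > 1e-6, 1.0 / sv, 0.0)
            dp = s * (Vt.T @ (inv * (U.T @ vn))) # 7. un-normalized shifts
            new_chi2 = chisq(p + dp)
            if new_chi2 <= chi2:
                improved = True
                break
            lam *= 10.0                          # 8. worse fit: raise damping
        if not improved:
            break
        p = p + dp                               # 9. apply the shifts
        if chi2 > 0 and (chi2 - new_chi2) / chi2 < tol:
            break                                # 10. converged
        chi2 = new_chi2
    return p

# usage: recover slope & intercept of a straight line
x = np.linspace(0.0, 1.0, 20)
I_obs = 1.0 + 2.0 * x
fit = lm_svd_refine(lambda q: q[0] + q[1] * x,
                    lambda q: np.column_stack([np.ones_like(x), x]),
                    np.array([0.0, 0.0]), I_obs, np.ones_like(x))
```

Since the normalized diagonal is exactly 1, adding λ to it is the same as multiplying it by (1+λ).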
8. SVD – SINGULAR VALUE DECOMPOSITION
Singularities & near singularities – see Numerical Recipes 2.9
LS matrix: solve $Ax = b$ for x by $x = A^{-1}b$; x are the parameter shifts
SVD: replace $A = UwV^T$, where U & V are such that $U^{-1} = U^T$ & $V^{-1} = V^T$
& w is a diagonal matrix; all the same size as A
Then: $A^{-1} = V\,(1/w_{ii})\,U^T$
The trick: what to do if $w_{ii} \approx 0$? (singularity) Make $1/w_{ii} = 0$! (instead of ∞)
Then: $x = V\,(1/w_{ii})\,U^T b$ does away with the ill-conditioned terms
Have to choose a tolerance on $w_{ii} \approx 0$ (typically $10^{-6}$, but $10^{-3}$ works well for proteins)
SVD is in the python library as numpy.linalg.svd
& uses the LAPACK _gesdd routine (fortran – code in Numerical Recipes 2.9)
NB: all double precision in python; the downside is that the $w_{ii}$ are not 1:1 with the
parameters, so identifying the failures is difficult.
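A minimal sketch of the trick with numpy.linalg.svd, zeroing $1/w_{ii}$ below a tolerance (here a tolerance relative to the largest singular value, a choice made for this sketch) so a singular normal matrix still yields finite shifts:

```python
import numpy as np

# SVD solve with the 1/w_ii -> 0 trick: singular values below tolerance
# are dropped rather than inverted.
def svd_solve(A, b, tol=1e-6):
    U, wvals, Vt = np.linalg.svd(A)
    winv = np.where(wvals > tol * wvals.max(), 1.0 / wvals, 0.0)
    return Vt.T @ (winv * (U.T @ b))

# A singular normal matrix: row/column 3 = row/column 1 + row/column 2,
# i.e. an exact degeneracy between parameters.
A = np.array([[2.0, 1.0, 3.0],
              [1.0, 3.0, 4.0],
              [3.0, 4.0, 7.0]])
b = np.array([1.0, 2.0, 3.0])
x = svd_solve(A, b)        # finite shifts despite det(A) = 0
```

A plain `np.linalg.solve` would fail (or return garbage) on this matrix; the SVD route simply ignores the degenerate direction.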
9. LEAST SQUARES ALGORITHMS IN GSAS-II
Useful choices – found in Controls
Analytic Hessian – default: Levenberg-Marquardt SVD from the Hessian & computed derivatives
Downside: hard singularities are hard to find; "linear algebra errors" cause failures
Analytic Jacobian – uses the Jacobian matrix (not the Hessian), no SVD; identifies singularities &
removes them from the LS refinement; always runs to convergence
Hessian SVD – no Levenberg-Marquardt (might be better for single crystal data)
Same downside as Analytic Hessian
Numeric – no derivatives & slow – mostly for testing purposes.
10. LEAST SQUARES THEORY - CONTINUED
Error estimates (mostly from W.C. Hamilton)
Given n observations > m parameters,
with distributions that have finite 2nd moments
(no need to be "normal", although they usually are for powders),
then LS gives parameter estimates (shifts in our case)
with the minimum variance in any linear combination.
The error estimates ("esd's") are
$\sigma_i^2 = b_{ii}\,\chi^2;\qquad \chi^2 = \frac{\sum w(I_o - I_c)^2}{n - m}$
$b_{ii}$ – diagonal elements of the inverted A matrix
Note: there is little justification for additional scaling of the $\sigma_i$.
NB: systematic errors will bias the results beyond $\sigma_i$.
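The esd recipe in a few lines, with illustrative numbers (the Jacobian and the residuals at convergence are made up):

```python
import numpy as np

# esd_i = sqrt(b_ii * chi^2), with B = A^-1 and
# chi^2 = sum(w*(Io - Ic)^2)/(n - m).
J = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])                  # dIc/dp_j at each point
w = np.ones(4)
resid = np.array([0.1, -0.05, 0.02, -0.08]) # Io - Ic at convergence
n, m = len(resid), J.shape[1]

A = J.T @ (w[:, None] * J)
B = np.linalg.inv(A)                        # b_ii on the diagonal
chi2 = np.sum(w * resid**2) / (n - m)
esd = np.sqrt(np.diag(B) * chi2)            # one esd per parameter
```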
11. RIETVELD MODEL: $I_c = I_i\{\sum k_p F_p^2 m_p L_p P(\Delta_p) + I_b\}$
$I_i$ – incident intensity – variable for fixed 2Θ (e.g. neutron TOF)
$k_p$ – scale factor for a particular phase
$F_p^2$ – structure factor for a particular reflection
$m_p$ – reflection multiplicity
$L_p$ – correction factors on intensity – texture, etc.
$P(\Delta_p)$ – peak shape function – size & microstrain, etc.
Sum over all reflections under a profile point (multiple phases)
$I_b$ – background function
More complex model than for single crystal diffraction
12. PROFILE FUNCTIONS P(ΔT) – BASICS
Gaussian profile – generally instrumental origin:
$G(\Delta T,\Gamma) = \frac{2}{\Gamma}\sqrt{\frac{\ln 2}{\pi}}\exp\left[-\frac{4\ln 2\,\Delta T^2}{\Gamma^2}\right]$
Lorentzian profile – largely sample effect:
$L(\Delta T,\gamma) = \frac{2}{\pi\gamma}\,\frac{1}{1 + 4\Delta T^2/\gamma^2}$
$\Delta T = T_{reflection} - T_{profile}$ (T = 2Θ or TOF)
Voigt – convolution = $G \otimes L$
Pseudo-Voigt – linear combination = $\eta L + (1-\eta)G$
$\eta$ via Thompson, Cox & Hastings – pseudo-Voigt ≈ Voigt
CW asymmetry from axial divergence – Finger, Cox & Jephcoat
NB: in gsas & GSAS-II, T is 2Θ in centidegrees or TOF in μs
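The Gaussian, Lorentzian and pseudo-Voigt basics can be written out directly. A sketch: η is taken as a given number here rather than derived from the Thompson, Cox & Hastings parameterization:

```python
import numpy as np

# Unit-area Gaussian and Lorentzian of a given FWHM, and their
# pseudo-Voigt linear combination.
def gauss(dT, fwhm):
    c = 4.0 * np.log(2.0)
    return np.sqrt(c / np.pi) / fwhm * np.exp(-c * dT**2 / fwhm**2)

def lorentz(dT, fwhm):
    return (2.0 / (np.pi * fwhm)) / (1.0 + 4.0 * dT**2 / fwhm**2)

def pseudo_voigt(dT, fwhm, eta):
    return eta * lorentz(dT, fwhm) + (1.0 - eta) * gauss(dT, fwhm)

dT = np.linspace(-5.0, 5.0, 2001)       # delta-T grid, arbitrary units
profile = pseudo_voigt(dT, fwhm=0.5, eta=0.6)
area = profile.sum() * (dT[1] - dT[0])  # ~1 (Lorentzian tails truncated)
```

Both component functions integrate to 1, so the pseudo-Voigt does too; the slow $1/\Delta T^2$ Lorentzian tails are why the numerical area on a finite window falls slightly short.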
13. SAMPLE BROADENING
Small (< 1 μm) crystals are not δ-functions
Size:
Size distribution – superposition of sharp to broad spots
Shape ~ Lorentzian
Width: $\Delta d^*$ = constant = $\Delta d/d^2$
Bragg's law: $\Delta 2\Theta = \lambda\,\Delta d/(d^2\cos\Theta)$ (= $X/\cos\Theta$)
Scherrer equation (k = 1, p = size): $S = \dfrac{180 k \lambda}{\pi p \cos\Theta}$
μstrain:
Unit cell variation (defects??)
Lorentzian distribution shape
$\Delta d/d$ = constant = $\Delta d^*/d^*$
Or: $\Delta 2\Theta = 2\,\Delta d\tan\Theta/d$ (= $Y\tan\Theta$)
$\mu$ – μstrain (×10⁶) parameter; $M = 180\mu\tan\Theta/\pi$
Isotropic crystallite size & μstrain broadening
(figures: reciprocal-space a*-b* sketches of size & μstrain broadening)
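Plugging illustrative numbers into the two broadening formulas (Cu Kα1 wavelength, a 0.1 μm crystallite, a μstrain parameter of 1000; the unit conventions here are assumptions for the sketch):

```python
import math

# Size and microstrain FWHM contributions at one Bragg angle.
# Units assumed: lambda and p in Angstroms, result in degrees 2-theta;
# the mustrain parameter is taken as delta-d/d in parts per million.
lam = 1.5406          # Cu K-alpha1 wavelength, Angstrom
p = 1000.0            # crystallite size, Angstrom (0.1 micron)
mu = 1000.0           # mustrain parameter (x 10^6)
theta = math.radians(20.0)

# Scherrer-type size broadening, k = 1
S = 180.0 * 1.0 * lam / (math.pi * p * math.cos(theta))

# Microstrain broadening
M = 180.0 * (mu * 1e-6) * math.tan(theta) / math.pi
```

Note the different angle dependence: size broadening grows as $1/\cos\Theta$, microstrain as $\tan\Theta$, which is what lets a refinement separate the two.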
14. CW PROFILE COEFFICIENTS
Size: $S = \dfrac{180 k \lambda}{\pi p \cos\Theta}$    μstrain: $M = 180\mu\tan\Theta/\pi$
Need: $S_\Gamma$ (Gaussian) & $S_\gamma$ (Lorentzian) sample broadening (2 slides back):
$\Gamma_g^2 = 8\ln 2\,(U\tan^2\Theta + V\tan\Theta + W + S_\Gamma)$
$\gamma = \dfrac{X}{\cos\Theta} + Y\tan\Theta + Z + S_\gamma$
Mixing coefficient for each: $m_s$ & $m_\mu$ (NB: called 'mx' in GSAS-II; range 0-1)
$S_\gamma = m_s S + m_\mu M$
$S_\Gamma = \left[(1 - m_s)^2 S^2 + (1 - m_\mu)^2 M^2\right]/8\ln 2$
Normally $m_s$ & $m_\mu$ = 1 (all-Lorentzian sample broadening), so:
$S_\gamma = S + M$
$S_\Gamma = 0$ (no Gaussian sample broadening)
X, Y, Z = 0 (no Lorentzian instrument broadening)
Lorentzian vs Gaussian sample broadening?
15. CW PROFILE PEAK BROADENING IN GSAS-II
The split of sample broadening from instrumental contribution
Instrument – fixed from calibration
Sample – phase & histogram dependent
Refined & constrained as needed
NB: for APS 11BM X,Y & Z = 0
Sample: new NIST SRMs 640f & 676b
16. TOF PROFILE FUNCTION IN GSAS-II
The best of gsas fxns 1, 3, 4 & 5 combined (2 is not implemented)
Convolution of paired back-to-back exponentials ($e^{\alpha T}$ rise, $e^{-\beta T}$ decay) with a
pseudo-Voigt $\eta L(\Delta T,\gamma) + (1-\eta)G(\Delta T,\sigma)$:
$H(\Delta T) = N\left\{(1-\eta)\left[e^{u}\,\mathrm{erfc}(y) + e^{v}\,\mathrm{erfc}(z)\right] + \dfrac{2\eta}{\pi}\left[\mathrm{Im}\,e^{p}E_1(p) + \mathrm{Im}\,e^{q}E_1(q)\right]\right\}$
N, p, q, u, v, y & z are functions of $\alpha$, $\beta$, $\sigma$ & $\gamma$
Empirical relationships to d-spacing:
$\alpha = \alpha_0/d;\quad \beta = \beta_0 + \beta_1/d^4 + \beta_q/d^2$
$\sigma^2 = s_0 + s_1 d^2 + s_2 d^4 + s_q d + S_\Gamma;\quad \gamma = Xd + Yd^2 + Z + S_\gamma$
Sample broadening terms $S_\Gamma$ & $S_\gamma$ – earlier slide; may be hkl dependent
$\beta_q$ & $s_q$ – new terms for epithermal effects
Peak position – not peak top: $T = Cd + Ad^2 + B/d + Z$
17. MIXING
Size: $S = \dfrac{180 k \lambda}{\pi p \cos\Theta}$    μstrain: $M = 180\mu\tan\Theta/\pi$
Need: $S_\Gamma$ (Gaussian) & $S_\gamma$ (Lorentzian) sample broadening (2 slides back):
$\Gamma_g^2 = 8\ln 2\,(U\tan^2\Theta + V\tan\Theta + W + S_\Gamma)$
$\gamma = \dfrac{X}{\cos\Theta} + Y\tan\Theta + Z + S_\gamma$
Mixing coefficient for each: $m_s$ & $m_\mu$ (NB: called 'mx' in GSAS-II; range 0-1)
$S_\gamma = m_s S + m_\mu M$
$S_\Gamma = \left[(1 - m_s)^2 S^2 + (1 - m_\mu)^2 M^2\right]/8\ln 2$
Normally $m_s$ & $m_\mu$ = 1 (all-Lorentzian sample broadening), so:
$S_\gamma = S + M$
$S_\Gamma = 0$ (no Gaussian sample broadening)
X, Y, Z = 0 (no Lorentzian instrument broadening)
Lorentzian vs Gaussian sample broadening?
18. TOF PROFILE PEAK BROADENING IN GSAS-II
The split of sample broadening from instrumental contribution
Instrument – fixed from calibration; Sample – phase & histogram dependent
Independent of experiment (e.g. CW or TOF)
Sample: new NIST SRMs 640f & 676b
19. AXIAL BROADENING FUNCTION – CONSTANT WAVELENGTH
Finger, Cox & Jephcoat, based on van Laar & Yelon
(figure: Debye-Scherrer cone crossing the receiving slit in a 2Θ scan; labels $2\Theta_{Bragg}$, $2\Theta_i$, $2\Theta_{min}$, slit height H)
Pseudo-Voigt = profile function
Depends on slit & sample "heights" wrt the diffractometer radius
H/L & S/L – parameters in the function; combined as (H+S)/L in GSAS-II
(typically 0.005 - 0.020)
20. HOW GOOD IS THIS FUNCTION?
Protein Rietveld refinement - Very low angle fit
1.0-4.0° peaks - strong asymmetry
“perfect” fit to shape
21. PROFILE FUNCTION – COMPLEXITIES
AN EXAMPLE – UNUSUAL LINE BROADENING
Sharp lines
Broad lines
Seeming inconsistency in line broadening - hkl dependent
22. MICROSTRAIN BROADENING
– PHYSICAL MODEL
Stephens, P.W. (1999). J. Appl. Cryst. 32, 281-289.
Also see Popa, N. (1998). J. Appl. Cryst. 31, 176-180.
Model – elastic deformation of crystallites
d-spacing expression:
$\dfrac{1}{d_{hkl}^2} = M_{hkl} = \alpha_1 h^2 + \alpha_2 k^2 + \alpha_3 l^2 + \alpha_4 kl + \alpha_5 hl + \alpha_6 hk$
Broadening – variance in $M_{hkl}$; refine the $C_{ij}$:
$\sigma^2(M_{hkl}) = \sum_{i,j} C_{ij}\,\dfrac{\partial M}{\partial\alpha_i}\,\dfrac{\partial M}{\partial\alpha_j}$
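The Stephens model in a few lines (the α and $C_{ij}$ values below are illustrative, not refined numbers):

```python
import numpy as np

# M_hkl = 1/d^2 as a quadratic form in hkl, and its variance from the
# refinable covariance terms C_ij.
def M_hkl(h, k, l, alpha):
    a1, a2, a3, a4, a5, a6 = alpha
    return a1*h*h + a2*k*k + a3*l*l + a4*k*l + a5*h*l + a6*h*k

def dM_dalpha(h, k, l):
    # derivatives of M_hkl with respect to each alpha_i
    return np.array([h*h, k*k, l*l, k*l, h*l, h*k], float)

alpha = np.array([0.04, 0.04, 0.02, 0.0, 0.0, 0.0])  # reciprocal metric
C = np.diag([1e-8, 1e-8, 1e-8, 5e-9, 5e-9, 5e-9])    # refined C_ij

h, k, l = 1, 2, 3
g = dM_dalpha(h, k, l)
var_M = g @ C @ g      # sigma^2(M_hkl): hkl-dependent broadening
```

Because the derivative vector g differs from reflection to reflection, the variance (and hence the linewidth) is hkl dependent, which is exactly the "seeming inconsistency" the sharp-plus-broad pattern shows.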
24. INTENSITY EXTRACTION
Structure factors from powder patterns? → structure solution
Apportion $I_o$ by ratios of $I_c(H)$ for contributing reflections
Sum over all under the peak profile
Correct for multiplicity & Lp, etc.
Result is $F^2(H)$
Here 4 reflections contribute
LeBail algorithm – extracted $F^2_o$ → new $F^2_c$, then next cycle;
refine only background, peak shapes & positions – few parameters
No constraints needed for overlaps – Simple
Pawley refinement – the $F^2_o$ are parameters,
+ background, peak shapes & positions – many parameters
Constraints & restraints required for overlaps – Complex
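The apportioning step can be sketched as one Le Bail-style update (all numbers are illustrative):

```python
import numpy as np

# Divide the observed intensity of an overlapped peak cluster among its
# contributing reflections in the ratio of their calculated intensities.
# The extracted values then serve as the next cycle's calculated set.
def apportion(I_obs_total, I_calc):
    I_calc = np.asarray(I_calc, float)
    return I_obs_total * I_calc / I_calc.sum()

I_calc = [10.0, 30.0, 40.0, 20.0]   # 4 overlapped reflections
F2_obs = apportion(200.0, I_calc)   # → [20., 60., 80., 40.]
```

This is why Le Bail extraction needs no overlap constraints: exactly-overlapped reflections simply keep the intensity ratio of the current calculated set, while a Pawley fit must instead constrain the degenerate $F^2_o$ parameters.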