The document summarizes key concepts and developments in seismic velocity analysis in transversely isotropic (TI) media over several decades, beginning with Thomsen's (1986) seminal work introducing parameters to characterize TI anisotropy. Subsequent work expanded on non-hyperbolic moveout (Tsvankin and Thomsen 1994), dipping reflectors (Tsvankin 1995), and velocity analysis techniques for TI media (Alkhalifah and Tsvankin 1995). Later contributions improved modeling of non-hyperbolic moveout using rational functions (Douma and Calvert 2006) and the generalized moveout approximation (Fomel and Stovas 2010). The document outlines the theoretical underpinnings of seismic analysis in anisotropic media.
This document outlines the key steps in a simple seismic data processing workflow, including: data initialization such as reformatting, geometry updates, and trace editing; amplitude processing; noise attenuation; deconvolution; multiple attenuation; velocity analysis and NMO; migration; stacking; and data makeup. Each processing step is briefly described and examples are provided of before and after visualizations. References and an opportunity for questions are provided at the end.
This document provides an overview of principles of seismic data processing. It discusses key concepts like seismic generation, data processing steps, velocity analysis, noise attenuation techniques, and common processing flows. The document is divided into multiple chapters that cover topics such as wave propagation, reflection coefficients, deconvolution, F-K transforms, and factors that affect seismic amplitudes. Specific noise types like swell noise are also explained and methods to attenuate them, such as using band-pass filters or amplitude/frequency filters, are described.
Role of Seismic Attributes in Petroleum Exploration_30May22.pptx (NagaLakshmiVasa)
The document discusses seismic attributes which are measurable properties of seismic data computed through mathematical manipulation to highlight geological features. It describes how seismic waves are reflected and refracted and how this seismic response is recorded. The key types of seismic attributes discussed are amplitude, phase, frequency and complex trace attributes. Specific amplitude attributes like RMS amplitude and sweetness are explained. The document also covers applications of seismic attributes like direct hydrocarbon indication and limitations. Spectral decomposition and AVO/AVA analysis are also summarized.
The document provides an overview of principles of seismic data interpretation. It discusses fundamentals of seismic acquisition and processing such as seismic response, phase, polarity, reflections, and resolution. It also covers topics like structural interpretation pitfalls, seismic interpretation workflows involving building databases and time-depth relationships, and structural styles. The document includes sections on depth conversion, subsurface mapping techniques, and different types of velocities.
Seismic data processing 13: stacking & migration (Amin Khalil)
1) Stacking involves correcting common midpoint (CMP) gathers for normal moveout (NMO) and then summing the traces to increase the signal-to-noise ratio. There are two types of stacking: horizontal and vertical.
2) While stacking improves signal-to-noise ratio, it averages over different incident angles and results in data only at zero offset.
3) Migration is needed to properly image dipping and irregular reflectors by removing wave phenomena like diffraction and properly locating reflections in the subsurface.
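The NMO-and-stack procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular package's implementation; the gather, velocity, and reflector time below are invented for the example.

```python
import numpy as np

def nmo_stack(gather, offsets, t0_axis, v_nmo):
    """NMO-correct a CMP gather and stack the traces.

    gather: (n_samples, n_traces) array; t0_axis: zero-offset times (s);
    offsets: source-receiver offsets (m); v_nmo: NMO velocity (m/s).
    """
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # Hyperbolic moveout: t(x) = sqrt(t0^2 + x^2 / v^2)
        t_x = np.sqrt(t0_axis**2 + (x / v_nmo) ** 2)
        # Pull each sample back to its zero-offset time by interpolation
        corrected[:, j] = np.interp(t_x, t0_axis, gather[:, j],
                                    left=0.0, right=0.0)
    # Horizontal stack: summing the corrected traces raises S/N
    return corrected.sum(axis=1) / gather.shape[1]

# Synthetic example: one flat reflector at t0 = 0.4 s, v = 2000 m/s
dt = 0.004
t0_axis = np.arange(0, 1.0, dt)
offsets = np.arange(0, 1000, 100.0)
gather = np.zeros((len(t0_axis), len(offsets)))
for j, x in enumerate(offsets):
    t_x = np.sqrt(0.4**2 + (x / 2000.0) ** 2)
    gather[int(round(t_x / dt)), j] = 1.0

stack = nmo_stack(gather, offsets, t0_axis, 2000.0)
print(t0_axis[np.argmax(stack)])  # stack peak lands near t0 = 0.4 s
```

With the correct velocity the moveout flattens and the stack peaks at the reflector's zero-offset time; with a wrong velocity the event would smear and the stack amplitude drop, which is the basis of velocity analysis.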
The document describes seismic interpretation workflows, including conventional and unconventional techniques. Conventional techniques involve horizon interpretations, fault picking, and tying seismic data to well logs to understand subsurface geology. Unconventional techniques analyze seismic attribute variations like amplitudes to identify hydrocarbon indicators. The workflow includes generating synthetics from well logs, interpreting horizons on seismic sections, identifying structures like faults and gas chimneys, and determining direct hydrocarbon indicators.
AVO ppt (Amplitude Variation with Offset) (Haseeb Ahmed)
AVO/AVA analysis can physically explain the presence of hydrocarbons in reservoirs, and the thickness, porosity, density, velocity, lithology, and fluid content of the reservoir rock can be estimated from it.
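The standard starting point for AVO work is the two-term Shuey approximation, R(θ) ≈ A + B·sin²θ, where A is the intercept and B the gradient. The sketch below fits these two attributes from an angle gather by least squares; the numeric values are illustrative, not taken from any real reservoir.

```python
import numpy as np

# Two-term Shuey approximation: R(theta) ~ A + B * sin^2(theta)
A_true, B_true = 0.08, -0.20                     # illustrative values
theta = np.radians(np.arange(0, 31, 5))          # incidence angles, deg -> rad
r_obs = A_true + B_true * np.sin(theta) ** 2     # "observed" reflectivity

# Least-squares fit of intercept and gradient from the angle gather
G = np.column_stack([np.ones_like(theta), np.sin(theta) ** 2])
A_est, B_est = np.linalg.lstsq(G, r_obs, rcond=None)[0]
print(round(A_est, 3), round(B_est, 3))  # recovers 0.08, -0.2
```

Cross-plotting the fitted intercept and gradient is how AVO anomalies (e.g., class III gas sands) are usually classified in practice.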
This document provides an introduction to seismic interpretation. It begins with an overview of seismic acquisition methods both onshore and offshore. It then discusses key concepts in seismic data such as common depth points, floating datum, two-way time, and the relationship between time and depth. The document also covers seismic resolution, reflection coefficients, and examples of calculating tuning thickness. Finally, it discusses important steps for seismic interpretation including checking the line scale and orientation and interpreting major reflectors and geometries.
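The tuning-thickness calculation mentioned above is a one-liner: the thinnest bed resolvable as two distinct reflections is commonly taken as a quarter of the dominant wavelength. Velocity and frequency below are illustrative.

```python
# Tuning thickness: lambda = v / f, tuning = lambda / 4
v = 3000.0   # interval velocity, m/s (illustrative)
f = 30.0     # dominant frequency, Hz (illustrative)

wavelength = v / f
tuning = wavelength / 4.0
print(wavelength, tuning)  # 100.0 m wavelength, 25.0 m tuning thickness
```

Below the tuning thickness the top and base reflections interfere and bed thickness must be inferred from amplitude rather than from the time separation of the two events.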
This document discusses static correction in seismic data processing. It covers:
1) Static correction removes the effects of surface elevation changes and weathering layers on seismic data.
2) Examples are given of how water depth variations can induce pull-down of reflectors, though this does not represent real geology.
3) A figure from a research paper shows a seismic section with associated velocity information, geology, and an approximate static corrections diagram.
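A minimal elevation-static sketch, under the simplifying assumption that source and receiver are shifted straight down to a flat datum through a single replacement velocity (real statics also model the weathering layer, which is omitted here; all numbers are invented):

```python
# Elevation statics: time shift to move a source and a receiver down
# to a flat datum through a replacement velocity. Simplified sketch;
# weathering-layer corrections are deliberately omitted.
def elevation_static(elev_src, elev_rcv, datum, v_replacement):
    """Total static shift in seconds (positive = delay to remove)."""
    return ((elev_src - datum) + (elev_rcv - datum)) / v_replacement

# Illustrative numbers: source at 120 m, receiver at 150 m elevation,
# datum at 100 m, replacement velocity 2000 m/s
shift = elevation_static(120.0, 150.0, 100.0, 2000.0)
print(shift)  # 0.035 s
```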
Filtering in seismic data processing: how filtering helps to suppress noise (Haseeb Ahmed)
To enhance the signal-to-noise ratio, different filtering techniques are used to remove noise.
Types of Seismic Filtering:
1- Frequency Filtering.
2- Inverse Filtering (Deconvolution).
3- Velocity Filtering.
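The first of these, frequency filtering, can be sketched with a zero-phase Butterworth band-pass. The band edges below (10-60 Hz) and sampling rate are illustrative choices, not a recommendation; the example suppresses a low-frequency swell-like component while passing reflection-band energy.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Zero-phase band-pass sketch: keep an illustrative 10-60 Hz
# reflection band, suppress low-frequency swell noise.
fs = 500.0                                 # sampling rate, Hz (2 ms)
b, a = butter(4, [10.0, 60.0], btype="bandpass", fs=fs)

t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 30 * t)        # 30 Hz "reflection" energy
noise = np.sin(2 * np.pi * 2 * t)          # 2 Hz swell-like noise
filtered = filtfilt(b, a, signal + noise)  # filtfilt = zero phase

# The 2 Hz component is strongly attenuated; the 30 Hz survives
print(np.corrcoef(filtered, signal)[0, 1] > 0.9)
```

Zero-phase filtering (forward and backward passes via `filtfilt`) matters in seismic work because a phase-distorting filter would shift reflection times.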
The Value Proposition of 3D and 4D Marine Seismic Data (Taylor Goss)
An explanation of what 3D/4D seismic is and why it is valuable to the oil and gas industry: how it helps reduce exploration risk and monitor the reservoir.
The document discusses parameters for designing 2D and 3D seismic surveys. It explains that survey design aims to achieve geophysical objectives cost-effectively within time constraints. Key factors in design include target depth, resolution needs, and noise levels. Parameters that can be set include fold, offsets, bin size, and record length. The design must satisfy criteria like resolving the target, avoiding interference, and allowing for processing steps. Proper parameter selection depends on the exploration problem and existing seismic data.
Seismic interpretation involves correlating seismic data features with geological elements to understand the subsurface. The goal is to map reservoirs, including depth, thickness, and properties. This involves processing data, well calibration, horizon and fault tracking, and attribute analysis. Direct hydrocarbon indicators on seismic can help identify potential reservoirs, but require validation with amplitude versus offset analysis due to limitations and need for a geological model.
Seismic refraction is a method that uses seismic waves, specifically P-waves, to investigate geological structures below the Earth's surface. There are two main types of elastic body waves: P-waves and S-waves. P-waves travel faster and are studied in simple seismic methods. When seismic waves encounter an interface between geological layers, the waves are reflected, refracted, and converted between P and S-waves. Refraction occurs when the velocity increases with depth, causing head waves that travel parallel to interfaces and are recorded by geophones on the surface. Snell's law governs refraction and describes how the refraction angle changes based on the velocity in each layer.
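The critical-angle condition behind head waves follows directly from Snell's law, sin(i)/v1 = sin(r)/v2: when the refraction angle reaches 90°, θc = arcsin(v1/v2), which only exists when v2 > v1. A small sketch with illustrative velocities:

```python
import math

# Snell's law critical angle: theta_c = arcsin(v1 / v2), defined
# only when velocity increases with depth (v2 > v1).
def critical_angle_deg(v1, v2):
    if v2 <= v1:
        raise ValueError("no critical refraction unless v2 > v1")
    return math.degrees(math.asin(v1 / v2))

# Illustrative velocities: 1500 m/s layer over a 3000 m/s layer
print(round(critical_angle_deg(1500.0, 3000.0), 1))  # 30.0 degrees
```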
1) Seismic interpretation uses acoustic waves to image the subsurface by measuring the two-way travel time and amplitude of reflections. 2) A seismic source generates wavefronts that travel through the subsurface, reflecting or transmitting at interfaces between rock layers. 3) The amount of reflection depends on the relative difference in physical properties across interfaces, defined by reflection coefficients. Layers thinner than 1/4 the wavelength cannot be resolved individually.
1) Conventional semblance analysis assumes no amplitude variation with offset (AVO), which can cause issues for events with strong AVO or polarity reversals. 2) The document proposes generalized semblance methods that incorporate AVO by modeling events with both hyperbolic moveout and amplitude variation. 3) It compares traditional, AB, and AK semblance on synthetic data, finding AK semblance maintains good velocity resolution while handling AVO better than traditional semblance.
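The conventional semblance the document starts from is the ratio of stacked energy to total energy over an NMO-corrected window, bounded between 0 and 1. The toy example below shows exactly the failure mode described: a polarity reversal drives conventional semblance to zero even though the event is perfectly flat. (This is the standard semblance formula, not the AB/AK variants the document proposes.)

```python
import numpy as np

# Conventional semblance: stacked energy / (N * total energy).
def semblance(window):
    """window: (n_samples, n_traces) NMO-corrected data window."""
    num = np.sum(window.sum(axis=1) ** 2)        # energy of the stack
    den = window.shape[1] * np.sum(window ** 2)  # N * total trace energy
    return num / den if den > 0 else 0.0

flat = np.ones((5, 8))                # flat event, constant amplitude
reversed_polarity = np.ones((5, 8))
reversed_polarity[:, 4:] = -1.0       # polarity flips at far offsets

print(semblance(flat))                # 1.0
print(semblance(reversed_polarity))   # 0.0 -- event invisible to semblance
```

AB/AK semblance generalize the numerator to fit an offset-dependent amplitude model, which is why they keep measuring coherence through the reversal.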
This document provides an overview of geophysical data analysis and seismic wave theory. It defines key terms like body waves, surface waves, reflection, refraction, and diffraction. Body waves include compressional P-waves and shear S-waves, while surface waves are Rayleigh and Love waves. Reflection, refraction, and diffraction occur when seismic waves encounter interfaces between layers with different velocities. Multiples and ghosts are examples of phenomena that can complicate seismic data analysis if not properly handled. The document aims to give theoretical background knowledge needed to understand seismic data.
1) Geophysics uses remote sensing to determine subsurface conditions by analyzing seismic and radar signals that travel through and reflect off underground materials.
2) There are four main modes of signal propagation: vertical reflection, wide angle reflection, critical refraction, and direct waves. Precisely measuring the travel times of these signals allows subsurface structures to be interpreted.
3) Reflection seismology analyzes reflected signals to determine depth to interfaces by relating travel time, distance between source and receiver, and velocity, while refraction seismology uses travel times of critically refracted signals to determine shallow subsurface velocity structure.
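For the two-layer refraction case in point 3, the head-wave travel-time line has slope 1/v2 and intercept time t_i = 2h·sqrt(v2² − v1²)/(v1·v2), so the refractor depth h can be recovered from the intercept. A sketch with illustrative numbers:

```python
import math

# Two-layer refraction: invert the intercept time of the head-wave
# travel-time line for depth to the refractor.
# t_i = 2 * h * sqrt(v2^2 - v1^2) / (v1 * v2)
def refractor_depth(t_intercept, v1, v2):
    return t_intercept * v1 * v2 / (2.0 * math.sqrt(v2**2 - v1**2))

# Illustrative: v1 = 1000 m/s, v2 = 2000 m/s, intercept 0.05 s
h = refractor_depth(0.05, 1000.0, 2000.0)
print(round(h, 2))  # depth of about 28.87 m
```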
This document discusses seismic data processing workflows. It begins with an introduction and agenda. The general workflow includes reformatting, trace editing, geometry handling, amplitude recovery, noise attenuation through techniques like frequency and FK filtering, deconvolution, multiple removal, migration, velocity analysis, NMO correction, muting, stacking, and post-stack filtering and amplitude scaling to produce a final image for geological interpretation. The document emphasizes that the proper workflow selection depends on processing environment, targets, costs, and client preferences. It concludes with time for questions.
This document discusses reservoir geophysics and geology. It begins with an introduction to geophysics, noting that most rocks are opaque so geophysics uses physics to obtain "geophysical images" of the subsurface based on properties like density, magnetism, conductivity, and velocity. It discusses using natural fields like gravity and magnetics to measure subsurface variations at a regional scale. Later sections discuss seismic reflection methods, potential field applications in mapping geology, and benefits of 3D seismic over 2D in providing better geological models. The document provides an overview of key concepts in reservoir geophysics and geology.
The analysis of all of the significant processes that formed a basin and deformed its sedimentary fill, from basin-scale processes (e.g., plate tectonics) to centimeter-scale processes (e.g., fracturing).
Seismic waves are energy propagated through the earth by earthquakes or artificial sources. There are two types of body waves (P and S waves) that travel through the earth and surface waves (Rayleigh and Love waves) that travel along the earth's surface. Seismic wave velocities depend on the elastic properties and density of the earth materials and are used to determine subsurface layering and structures. Analysis of travel times and slopes of seismic wave arrivals on record sections allows calculation of subsurface velocities and reflection/refraction of waves at interfaces between subsurface layers.
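The dependence of wave speed on elastic properties and density mentioned above is captured by the standard body-wave formulas Vp = sqrt((K + 4μ/3)/ρ) and Vs = sqrt(μ/ρ); S-waves vanish in fluids because μ = 0. The moduli below are illustrative sandstone-like values, not measurements.

```python
import math

# Body-wave velocities from elastic moduli and density:
# Vp = sqrt((K + 4/3 * mu) / rho), Vs = sqrt(mu / rho)
def vp_vs(bulk_modulus, shear_modulus, density):
    vp = math.sqrt((bulk_modulus + 4.0 * shear_modulus / 3.0) / density)
    vs = math.sqrt(shear_modulus / density)
    return vp, vs

# Illustrative values: K = 20 GPa, mu = 15 GPa, rho = 2300 kg/m^3
vp, vs = vp_vs(20e9, 15e9, 2300.0)
print(round(vp), round(vs))  # Vp ~ 4170 m/s, Vs ~ 2554 m/s
```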
This is for students of geophysics who want to know the basics of multicomponent seismic. For further detail or any query, you can email me at bprasad461@gmail.com.
2D and 3D land seismic data acquisition and seismic data processing (Ali Mahroug)
The seismic method has three principal applications:
a. Delineation of near-surface geology for engineering studies, and coal and mineral exploration, within a depth of up to 1 km: the seismic method applied to near-surface studies is known as engineering seismology.
b. Hydrocarbon exploration and development within a depth of up to 10 km: the seismic method applied to the exploration and development of oil and gas fields is known as exploration seismology.
c. Investigation of the earth's crustal structure within a depth of up to 100 km: the seismic method applied to crustal and earthquake studies is known as earthquake seismology.
This document outlines a simple seismic data processing workflow. It begins with acquiring field data and updating the geometry. Next steps include trace editing, amplitude recovery, and noise attenuation. Velocity analysis and normal moveout correction are then applied. Deconvolution and multiple attenuation are performed before migration. Post-migration involves stacking, filtering and amplitude scaling to produce the final processed seismic section. The goal of seismic processing is to produce high quality seismic data for geological interpretation and hydrocarbon exploration.
This document provides information about an online presentation on the electrical resistivity method in applied geophysics and engineering geology. It includes details about the date, time, presenter, and link to join the Zoom meeting. The bulk of the document then discusses the background and principles of the electrical resistivity method, including different electrode configurations, modes of deployment like vertical electrical sounding and constant separation traversing, and factors that influence electrode selection. Tables provide data on resistivities of common rocks and minerals and geometric factors for different electrode arrays.
This document summarizes research on direct non-linear inversion of 1D acoustic media using the inverse scattering series. Key points:
- A method is derived for directly inverting 1D acoustic media with varying velocity and density, without requiring an estimate of properties above reflectors or assuming linear relationships between property changes and reflection data.
- Testing on a single reflector case showed improved estimates of property changes beyond the reflector compared to linear methods, for a wider range of angles.
- A special parameter related to velocity change was identified that has the correct sign in the linear inversion and is not affected by issues like "leakage" that complicate inversions.
Probing spin dynamics from the Mott insulating to the superfluid regime in a ... (Arijit Sharma)
This document summarizes an experiment probing spin dynamics in a dipolar atomic Bose gas across the Mott insulating to superfluid transition in an optical lattice. Three key findings are:
1) In the Mott regime, spin dynamics shows complex oscillations that are well described by a model of intersite dipole-dipole interactions.
2) In the superfluid regime, spin dynamics is exponential and agrees with simulations including contact and dipolar interactions.
3) In the intermediate regime, oscillations survive with reduced amplitude, challenging theoretical descriptions accounting for dipolar interactions, contact interactions, and superexchange mechanisms.
Susceptibility weighted imaging (SWI) uses phase information from gradient echo MRI sequences to enhance contrast between tissues with different magnetic susceptibility properties. SWI phase images are processed to remove background field inhomogeneities, leaving signal related to local susceptibility effects. The filtered phase images are then used to generate a phase mask that is multiplied with the original magnitude image, enhancing contrast between tissues like deoxygenated blood and iron deposits based on their magnetic properties. SWI provides sensitive detection of substances that alter local magnetic fields, with applications including neuroimaging to assess iron content changes in diseases.
The document summarizes the research activity of Antonio F. Di Rienzo over the past three years. It discusses using the lattice Boltzmann method to solve the radiative transfer equation, developing a lattice Boltzmann model for reactive flows that can account for large temperature variations, and a new link-wise artificial compressibility method beyond lattice Boltzmann.
The document discusses using gravitational wave waveform models to infer astrophysical properties from observations of gravitational wave events. It describes how waveform models encode information about binary black hole parameters like mass and spin, and how Bayesian inference can be used to estimate these parameters from the detected gravitational wave signal. It also addresses assessing confidence in detections and evaluating potential modeling systematics by comparing waveform models to numerical relativity simulations.
Mate 280 characterization of powders and porous materialsSami Ali
Particle size, surface area, and porosity are important characteristics that control many properties of materials. Particle size can be measured using techniques like sieving, sedimentation, light scattering, and gas adsorption which measure different parameters like size, surface area, or settling rate depending on the technique. Gas adsorption is commonly used to measure specific surface area and porosity by adsorbing gas molecules on the internal surface of porous materials.
The Quantum Theory Group at the University of Glasgow conducts research in several areas including quantum optics, quantum information, foundations of quantum mechanics, and light-matter interactions. The group has academic staff, research fellows, and PhD students. Specific research topics include boson sampling, quantum state discrimination, quantum measurement, open quantum systems, chiral molecules, optical angular momentum, and optical forces.
This document describes a fast and reliable method for surface wave tomography to estimate 2-D models of isotropic and azimuthally anisotropic velocity variations from regional or global surface wave data. The method inverts surface wave group or phase velocity measurements to produce tomographic maps in a spherical geometry. It allows for spatial smoothing and model amplitude constraints to be applied simultaneously. Examples applying this technique globally and regionally in Eurasia and Antarctica are presented.
This document describes a fast and reliable method for surface wave tomography to estimate 2-D models of isotropic and azimuthally anisotropic velocity variations from regional or global surface wave data. The method inverts surface wave group or phase velocity measurements to produce tomographic maps in a spherical geometry. It allows for spatial smoothing and model amplitude constraints to be applied simultaneously. Examples applying this technique globally and regionally in Eurasia and Antarctica are presented.
DISPERSION OF AEROSOLS IN ATMOSPHERIC FLUID FLOWijscmcj
This document summarizes a research paper that presents a mathematical model to study the dispersion of aerosols with and without chemical reaction in the presence of electric and magnetic fields. Key points:
- The model considers laminar flow of aerosols between two parallel plates with an applied electric and magnetic field.
- Governing equations for momentum, species concentration, electric potential, and Maxwell's equations are presented and solved numerically.
- Results are presented graphically to show the impact of reaction rate, electric field, and magnetic field (Hartmann number) on aerosol dispersion and concentration.
- The goal is to better understand aerosol dispersion under combined convection, diffusion, electric and magnetic
Nuclear magnetic imaging of the lungs can be performed using hyperpolarized noble gases. This technique uses resonance of polarized noble gas atoms in an external magnetic field to study the structure of the lungs, which can be used for diagnosing lung diseases. NMR and optical pumping techniques are used to polarize the noble gases for medical imaging applications. Specifically, optical pumping is used to hyperpolarize gases like helium-3 and xenon-129 outside of the MRI scanner, followed by injection into the patient and scanning to generate images of lung structure and function with improved sensitivity over conventional proton imaging.
Ultrasonic guided wave techniques have great potential for structural health monitoring applications. Appropriate mode and frequency selection is the basis for achieving optimised damage monitoring performance.
In this paper, several important guided wave mode attributes are
introduced in addition to the commonly used phase velocity and group velocity dispersion curves while using the general corrosion problem as an example. We first derive a simple and generic wave excitability function based on the theory of normal mode expansion and the reciprocity theorem. A sensitivity dispersion curve is formulated based on the group velocity dispersion curve. Both excitability and sensitivity dispersion curves are verified with finite element simulations. Finally, a
goodness dispersion curve concept is introduced to evaluate the tradeoffs between multiple mode selection objectives based on the wave velocity, excitability and sensitivity.
This document provides an overview of polymer analysis using mass spectrometry. It discusses what mass spectrometry is and the types of information it can provide about molecular mass and structure. It also describes how mass spectrometers work by introducing samples, ionizing them, analyzing the ions, and detecting them. Specific ionization methods like electrospray ionization and applications of mass spectrometry in areas like biotechnology and pharmaceuticals are summarized. The document concludes by outlining how mass spectrometry is used for polymer analysis by providing detailed structural and compositional information.
This document presents a novel algorithm for classifying signals (glitches) that arise in gravitational wave channels of the Laser Interferometer Gravitational-Wave Observatory (LIGO). The algorithm uses Kohonen Self Organizing Feature Maps and discrete wavelet transform coefficients to classify glitches based on their morphology and other parameters like signal-to-noise ratio and duration. This low-latency algorithm aims to help the LIGO detector characterization group identify and mitigate noise sources more quickly.
Introduction to Spectroscopy,
Introduction to UV, electronic transitions, terminology, chromophore, Auxochrome, Examples and Applications.
Introduction to IR, Fundamental vibrations, Types of Vibrations, Factors affecting the vibrational freaquencies, Group frequencies, examples and applications.
The potential of terahertz imaging for cancer diagnosisZahid Qaisar
This document reviews the potential of terahertz (THz) imaging and spectroscopy for cancer diagnosis. It begins with an introduction to THz radiation and the unique properties that make it suitable for medical applications, such as its non-ionizing nature. The document then discusses the principles and techniques of THz imaging and spectroscopy, including continuous wave and pulsed systems. It reviews investigations of THz imaging and spectroscopy for detecting various cancer types like skin, breast, cervical, and colon cancer. The document concludes that THz imaging could help combine macroscopic and microscopic imaging to better delineate cancer margins due to THz radiation's sensitivity to water and structural changes caused by cancer.
1) Numerical simulations of magnetized accretion disks show evidence of selective transport of large-scale magnetic field modes from the disk into the corona.
2) A greater fraction of magnetic energy in the corona is stored in these large-scale modes compared to the disk, where energy is more evenly distributed across scales.
3) Magnetic field anisotropy decreases with height from the disk midplane, and turbulence and buoyancy may work together to explain observed non-toroidal field configurations transported out of the disk.
MRI PHYSICS PART 3 Susceptibility-weighted images BY GKM .pptxGulshan Verma
Susceptibility-weighted imaging (SWI) is based on a fully flow compensated, high-resolution, 3D gradient echo method by integrating both magnitude and phase information.
It was previously referred to as high resolution blood oxygen level– dependent (BOLD) venography (HRBV), but because of its broader application than evaluating venous structures, it is now referred to as SWI.
Susceptibility-weighted imaging (SWI) allows detection and characterization of tissue components based on differences in their susceptibilities.
SWI sequences are typically acquired in 3D (rather than 2D) mode,
Allowing thinner slices, and Use Smaller voxel sizes,
Flow compensation in all three directions is used to reduce artifacts,
Parallel imaging is employed to reduce imaging time.
Either single or multiple echoes may be acquired in a given TR interval.
A key feature of SWI is that magnitude and phase information are independently processed/displayed as well as combined for diagnostic purposes
This document provides an introduction to nuclear magnetic resonance (NMR) spectroscopy. It begins with an overview of NMR and spectroscopy. It then reviews common units used in NMR such as time, temperature, magnetic field strength, energy, and frequency. The document consists of introductory chapters that cover topics like the basics of NMR, mathematics relevant to NMR, spin physics, and energy levels. It provides explanations of fundamental NMR concepts such as spin, magnetic moments, energy states, resonance frequency, and relaxation times T1 and T2. The overall document serves as a comprehensive primer on basic NMR principles.
Similar to on Thomesn's strange anisotropy parameter (20)
Here is a new 9-point scheme for finite difference solution of acoustic waves in frequency domain. The algorithm honors both accuracy and computational efficiency.
This document discusses how colour perception can impact the interpretation of seismic data. It notes that human vision processes colour in complex ways, and colour maps used in seismic interpretation can influence interpreters through visual effects like luminance, false contours, chromostereopsis, and simultaneous contrast. An experiment found significant variability between interpreters in delineating a geobody using different colour maps, with differences up to 235% in measured area. The document emphasizes that interpreters should be aware of how colour perception can affect their interpretive decisions.
This document provides an introduction to the least squares minimization method. It explains that least squares finds the model that best fits observed data by minimizing the sum of the squares of the differences between the observed and predicted data. It describes how least squares can be interpreted both as a best approximation estimate and as a maximum likelihood estimate. The document also discusses the geometric interpretation of least squares and practical considerations for its use such as checking that data is normally distributed.
Wide aperture reflection refraction profiling uses wide-angle reflected and diving wave energy to develop velocity models of seismic sections. It exploits long offset data to observe diving waves and wide-angle reflections that penetrate deeper than conventional methods. The technique involves first break tomography to obtain an initial velocity model, which is then refined through iterative forward modeling and matching of observed and calculated arrival times and amplitudes.
The document discusses the Levenberg Marquardt algorithm for solving nonlinear least squares problems. It begins by introducing nonlinear inverse problems and nonlinear least squares problems. It then describes the Gauss-Newton algorithm and challenges with nonlinear least squares problems like slow convergence and lack of convergence guarantees. The Levenberg Marquardt algorithm is presented as modifying the Gauss-Newton algorithm with a new parameter to address convergence issues. An example applying the gradient descent, Gauss-Newton, and Levenberg Marquardt algorithms to an earthquake location problem is provided showing the Levenberg Marquardt algorithm converges more quickly.
This document evaluates the performance of the Huber function in 1D frequency-domain full waveform inversion (FWI). It introduces FWI and discusses the l1 and l2 norms commonly used, as well as the Huber function which combines the two. The methodology section describes implementing the Huber function for 1D FWI, including gradient formulae. Results show FWI using the Huber function produces acceptable models for two synthetic velocity models, with the threshold parameter affecting the models differently based on their smoothness. The Huber function performs well in the presence of noise and balances the benefits of the l1 and l2 norms.
More from Inistute of Geophysics, Tehran university , Tehran/ iran (9)
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the
‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor
collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the
MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space,
because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia
DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations
at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based
on a simple phase-mixing model, the observed number of caustics are consistent with a merger that occurred 1–2 Gyr ago.
We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative
measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data
1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’
did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within
the last few Gyr, consistent with the body of work surrounding the VRM.
4. Seismic isotropy vs anisotropy
Isotropy comes from the Greek words isos (equal) and tropos (way) and means uniform in all directions. Isotropic materials such as glass exhibit the same material properties in all directions: physical properties are not direction dependent.
Anisotropy: physical properties are direction dependent (Thomsen, 2014).
Seismic anisotropy: the dependence of seismic velocity upon angle.
5. Beyond the simple definition of seismic anisotropy
a) isotropic, b) anisotropic media
Shear wave splitting in anisotropic media
https://en.wikipedia.org/wiki/Shear_wave_splitting
6. Beyond the simple definition of seismic anisotropy
Isotropic wave equation simulation vs. anisotropic wave equation simulation
7. Let's go further: mathematical description of elastic media
Linear elastic material and Hooke's law
8. Seismic anisotropy media (transverse isotropy)
Common coring configuration for anisotropy measurements (Meléndez-Martínez, 2014).
Rüger, 1997
http://www.glossary.oilfield.slb.com
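For a transversely isotropic (TI) medium with a vertical symmetry axis, Hooke's law involves five independent stiffness constants. As a hedged illustration (not from the slides; the function name is my own), a minimal Python sketch assembling the Voigt-notation stiffness matrix of a VTI medium:

```python
def vti_stiffness(c11, c33, c44, c66, c13):
    """Return the 6x6 Voigt stiffness matrix of a VTI medium as nested lists.

    The five independent constants are C11, C33, C44, C66 and C13;
    C12 = C11 - 2*C66 is fixed by rotational symmetry about the vertical axis.
    """
    c12 = c11 - 2.0 * c66
    return [
        [c11, c12, c13, 0.0, 0.0, 0.0],
        [c12, c11, c13, 0.0, 0.0, 0.0],
        [c13, c13, c33, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, c44, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.0, c44, 0.0],
        [0.0, 0.0, 0.0, 0.0, 0.0, c66],
    ]
```

The combination C11 − 2C66 appearing here is the same quantity whose histogram is discussed on a later slide.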
13. Weak elastic anisotropy and “anisotropies”
We can recast the velocity equations for P-SV and SH waves using notation involving two elastic moduli (the vertical P- and S-wave velocities) plus three measures of anisotropy (we call them “anisotropies”). These anisotropies:
1 – simplify the velocity equations for the different wave types;
2 – are non-dimensional, so one may speak of X percent anisotropy;
3 – reduce to zero in the degenerate case of isotropy, so a material with small anisotropy values may be denoted “weakly anisotropic”.
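For concreteness, the two vertical velocities and the three anisotropies can be sketched in Python, assuming Thomsen's (1986) standard definitions (the function name and unit convention are my own):

```python
import math

def thomsen_parameters(c11, c33, c44, c66, c13, rho):
    """Thomsen (1986) parameters for a VTI medium.

    Stiffnesses and density must be in consistent units
    (e.g. GPa and g/cm^3 give velocities in km/s).
    """
    alpha0 = math.sqrt(c33 / rho)        # vertical P-wave velocity
    beta0 = math.sqrt(c44 / rho)         # vertical S-wave velocity
    eps = (c11 - c33) / (2.0 * c33)      # P-wave anisotropy
    gam = (c66 - c44) / (2.0 * c44)      # SH-wave anisotropy
    delta = ((c13 + c44) ** 2 - (c33 - c44) ** 2) / (2.0 * c33 * (c33 - c44))
    return alpha0, beta0, eps, gam, delta
```

For an isotropic medium (C11 = C33, C44 = C66, C13 = C33 − 2C44) all three parameters vanish, matching the statement that the anisotropies reduce to zero in the degenerate case of isotropy.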
14. Thomsen’s revolution
The algebraic complexity of the new equations impedes a clear understanding of their physical content. Progress may be made, however, by observing that most rocks are only weakly anisotropic, even though many of their constituent minerals are highly anisotropic (Thomsen, 1986).
15. Weak anisotropy
Based on laboratory data, Thomsen (1986) showed that most of the evaluated rocks have anisotropy in the weak to moderate range (i.e. less than 0.2).
We expand the velocity equations in a Taylor series in the anisotropies and retain only the linear terms. Now we can express the anisotropies as functions of angle.
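The linearized expansion leads to Thomsen's well-known weak-anisotropy phase-velocity approximations. A sketch, assuming the standard 1986 forms, with theta the phase angle measured from the vertical symmetry axis:

```python
import math

def vp_weak(theta, alpha0, eps, delta):
    """Weak-anisotropy P-wave phase velocity (Thomsen, 1986)."""
    s2 = math.sin(theta) ** 2
    c2 = math.cos(theta) ** 2
    return alpha0 * (1.0 + delta * s2 * c2 + eps * s2 * s2)

def vsv_weak(theta, alpha0, beta0, eps, delta):
    """Weak-anisotropy SV-wave phase velocity."""
    s2c2 = math.sin(theta) ** 2 * math.cos(theta) ** 2
    return beta0 * (1.0 + (alpha0 ** 2 / beta0 ** 2) * (eps - delta) * s2c2)

def vsh_weak(theta, beta0, gam):
    """Weak-anisotropy SH-wave phase velocity."""
    return beta0 * (1.0 + gam * math.sin(theta) ** 2)
```

At theta = 0 these reduce to the vertical velocities, and at theta = 90° the P-wave velocity becomes alpha0 (1 + eps), which is one common way of reading eps as "the P-wave anisotropy".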
17. By measurements at 0, 45 and 90 degrees we have:
Let’s consider error propagation in δ.
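Under the weak-anisotropy approximation, Vp(45°) ≈ α0 (1 + ε/4 + δ/4), so the three measurements determine ε and δ directly. A hedged sketch of that inversion (my own helper name, not from the slides):

```python
def eps_delta_from_measurements(vp0, vp45, vp90):
    """Recover eps and delta from P-wave phase velocities measured at
    0, 45 and 90 degrees, assuming weak anisotropy (Thomsen, 1986)."""
    eps = (vp90 - vp0) / vp0                # from Vp(90) = alpha0 * (1 + eps)
    delta = 4.0 * (vp45 / vp0 - 1.0) - eps  # from Vp(45) = alpha0 * (1 + eps/4 + delta/4)
    return eps, delta
```

The factor of 4 multiplying the 45° term is one reason δ is so sensitive to errors in the oblique measurement, which motivates the error-propagation discussion.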
18. For more information about the previous topics see:
Thomsen, L. [2014]. Understanding Seismic Anisotropy in Exploration and Exploitation, second edition, SEG.
Tsvankin, I. [2012]. Seismic Signatures and Analysis of Reflection Data in Anisotropic Media, third edition, Society of Exploration Geophysicists.
And related references.
Leon Thomsen
Ilya Tsvankin
24. Schema of deformation of a vertical plug (left) and a horizontal plug (right) of organic shale under axial compressional testing. Dark grey represents the plugs before deformation; light grey represents the plugs after deformation.
25. Two possible explanations:
1 – measurement uncertainty;
2 – the material should not be classified as a TI medium.
A TI medium that is infinitely stronger in the horizontal direction than in the vertical direction corresponds to the limit ν_HV → 1.
30. Constraints on the anellipticity parameter
The “anellipticity” parameter η describes the degree of deviation from elliptic anisotropy. It is important for anisotropic seismic data processing because it determines the relation between the normal-moveout velocity and the horizontal velocity (Tsvankin, 2012).
Lower and upper bounds for η
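As a one-line illustration, using the standard Alkhalifah–Tsvankin (1995) definition, which the slides do not spell out:

```python
def anellipticity(eps, delta):
    """Anellipticity eta = (eps - delta) / (1 + 2*delta).

    eta = 0 (i.e. eps = delta) corresponds to elliptical anisotropy.
    """
    return (eps - delta) / (1.0 + 2.0 * delta)
```

For elliptical anisotropy (η = 0) the normal-moveout velocity alone describes the P-wave moveout; η > 0 is the typical case for sedimentary rocks.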
31. Laboratory data and the constraints
C13: several data points have negative values; the corresponding δ values lie above the high bound and tend to be higher.
ν_HH: there are quite a few points with ν_HH > ν_HV; the corresponding δ values lie below the low bound and tend to be lower.
About two-thirds of the data points lie in the center area, within which all hydrocarbon source rocks with TI anisotropy are believed to lie.
There is some uncertainty in the middle part of the figure: more data points lie below the low bound than above the high bound.
32. Uncertainty in laboratory velocity anisotropy measurements
Laboratory velocity anisotropy measurement on TI media requires at least five velocity component measurements, among which one velocity measurement must be made in an oblique direction.
33. Case I: a negative 2° angle error makes about 20% of the data points lie below the low bound.
Case II: a negative 5° angle error makes about 62% of the data points lie below the low bound.
Case III: a positive 5° angle error makes less than 8% of the data points lie above the high bound.
1) If the phase velocity at 45° is underestimated by 1%, 22% of the data points move below the low bound.
2) If the phase velocity at 45° is overestimated by 1%, only one data point moves above the high bound.
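The strong sensitivity to the oblique measurement can be illustrated with the weak-anisotropy inversion for δ (an illustrative sketch, not the study's actual computation):

```python
def delta_bias(vp0, vp45, eps, relative_error):
    """Shift in the recovered delta caused by a relative error in the
    45-degree phase velocity, using delta = 4*(vp45/vp0 - 1) - eps."""
    vp45_measured = vp45 * (1.0 + relative_error)
    true_delta = 4.0 * (vp45 / vp0 - 1.0) - eps
    biased_delta = 4.0 * (vp45_measured / vp0 - 1.0) - eps
    return biased_delta - true_delta

# A 1% under-estimate at 45 degrees shifts delta by about
# 4 * (-0.01) * vp45 / vp0, i.e. roughly -0.04: large compared with
# typical delta magnitudes, so borderline points easily cross the bounds.
```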
34. Difference between group and phase velocities
Ray tracing of the ultrasonic velocity measurement on the 45° plug (left); transmission time versus angle (right).
I. If the transducer is not wide enough (or the sample is too long), the first-arriving energy might be missed by the receiving transducer, and the phase velocity tends to be underestimated.
37. Histogram of C11 − 2C66 from laboratory velocity anisotropy measurements.
There are only 2 data points with C11 − 2C66 < 0: one is due to a data entry error, and the other is due to signals of substandard quality.
38. The trends of the approximated bounds (δ+ and δ−) comply well with the laboratory-measured data if the data points lying outside of the δ bounds are not displayed.
40. Conclusions
The physical constraints on the Thomsen parameter δ can help us understand the relation between δ and the other Thomsen parameters.
Generally, δ increases with ε and decreases with increasing γ. Variation of β0/α0 of the hydrocarbon source rocks in a given area is usually small, so δ is less sensitive to β0/α0.
δ can be approximately predicted from the other Thomsen parameters.
Judged against these constraints, laboratory velocity anisotropy measurements carry significant uncertainties.