This document discusses seismic data processing workflows. It begins with an introduction and agenda. The general workflow includes reformatting, trace editing, geometry handling, amplitude recovery, noise attenuation through techniques like frequency and FK filtering, deconvolution, multiple removal, migration, velocity analysis, NMO correction, muting, stacking, and post-stack filtering and amplitude scaling to produce a final image for geological interpretation. The document emphasizes that the proper workflow selection depends on processing environment, targets, costs, and client preferences. It concludes with time for questions.
2. Agenda
• Personal Introduction.
• Objective of Seismic Processing.
• General Workflow of Seismic Processing.
• Selection of Proper Processing Workflow.
• Q & A
4. Ahmed Osama Ahmed
• 3rd-Level Student at Faculty of Science, ASU, Geology Dept.
• President of AAPG-ASUSC 2016/2017.
• Online Freelance (Arabic <-> English) Translator.
36. Deconvolution
• An algorithmic process that reverses the effect of convolution.
• It aims to:
1. Shape the wavelet.
2. Improve the resolution of the data.
3. Attenuate some multiples.
45. Migration
The aim of this major process is to remove the effect of bed curvature and to relocate energy to its true subsurface position.
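To make the idea concrete, here is a minimal, hypothetical diffraction-stack (Kirchhoff-style) sketch for a constant-velocity, zero-offset section: every image point sums input amplitudes along its diffraction hyperbola, collapsing diffractions back to their apexes. The constant velocity, geometry, and absence of amplitude weighting are simplifying assumptions, not how a production migration works.

```python
# Hypothetical diffraction-stack (Kirchhoff-style) migration sketch.
# Assumes constant velocity, zero-offset data, no amplitude weighting.
import numpy as np

def diffraction_stack(section, dx, dt, v):
    """section: (n_traces, n_samples) zero-offset data; returns a migrated image."""
    nx, nt = section.shape
    image = np.zeros_like(section)
    x = np.arange(nx) * dx
    for ix_out in range(nx):
        for it_out in range(nt):
            t0 = it_out * dt
            # Zero-offset diffraction traveltime from the image point to each trace.
            t_diff = np.sqrt(t0**2 + (2.0 * (x - x[ix_out]) / v) ** 2)
            it_in = np.rint(t_diff / dt).astype(int)
            valid = it_in < nt
            image[ix_out, it_out] = section[valid, it_in[valid]].sum()
    return image

# Usage: a synthetic diffraction hyperbola collapses toward its apex.
nx, nt, dx, dt, v = 64, 256, 12.5, 0.004, 2000.0
section = np.zeros((nx, nt))
x0, t0 = 32 * dx, 128 * dt
for ix in range(nx):
    it = int(round(np.sqrt(t0**2 + (2.0 * (ix * dx - x0) / v) ** 2) / dt))
    if it < nt:
        section[ix, it] = 1.0
migrated = diffraction_stack(section, dx, dt, v)  # energy focuses near (32, 128)
```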
54. Mute
NMO correction stretches the data, so some noisy parts of it no longer make sense; the muting process removes these parts.
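A minimal sketch (my illustration, not from the deck) of NMO correction followed by a stretch mute; the constant NMO velocity and the 50% stretch threshold are illustrative assumptions.

```python
# Hypothetical sketch: constant-velocity NMO correction plus a stretch mute.
import numpy as np

def nmo_with_stretch_mute(gather, offsets, dt, v_nmo, max_stretch=0.5):
    """gather: (n_traces, n_samples); mutes samples stretched beyond max_stretch."""
    n_traces, n_samples = gather.shape
    t0 = np.arange(n_samples) * dt                 # zero-offset two-way time
    out = np.zeros_like(gather)
    for ix, x in enumerate(offsets):
        tx = np.sqrt(t0**2 + (x / v_nmo) ** 2)     # hyperbolic moveout time
        out[ix] = np.interp(tx, t0, gather[ix], left=0.0, right=0.0)
        stretch = (tx - t0) / np.maximum(t0, dt)   # relative NMO stretch
        out[ix, stretch > max_stretch] = 0.0       # mute over-stretched samples
    return out

# Usage: shallow samples at far offsets are the ones that get muted.
gather = np.random.randn(48, 1001)
offsets = np.arange(48) * 25.0
corrected = nmo_with_stretch_mute(gather, offsets, dt=0.002, v_nmo=2000.0)
```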
60. Filtering & Amplitude Scaling
Amplitude Scaling: balancing the amplitudes of the stacked section horizontally or vertically to facilitate the later interpretation process.
Filtering: high-cut filter, low-cut filter, band-pass filter (passband between F1 and F2), and notch filter (notch at F3, passband elsewhere).
[Figure: passband diagrams for each filter type.]
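As an illustration of amplitude scaling, here is a hedged sketch of automatic gain control (AGC), one common way to balance a stacked trace vertically; the window length is an illustrative assumption.

```python
# Hypothetical AGC sketch: divide each sample by the RMS of a sliding window.
import numpy as np

def agc(trace: np.ndarray, win: int = 251) -> np.ndarray:
    """Balance amplitudes along the trace by normalizing with the local RMS."""
    power = np.convolve(trace**2, np.ones(win) / win, mode="same")
    return trace / np.maximum(np.sqrt(power), 1e-12)

# Usage: a trace whose amplitude decays with time comes out balanced.
trace = np.random.randn(2001) * np.linspace(1.0, 0.05, 2001)
balanced = agc(trace)
```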
1- Introducing Myself
2- Seismic Stages & Why Processing?
3- Simple Workflow with Domains
4- How Do We Select the Proper Processing Sequence?
5- Any Questions?
Introduction
Seismic work is performed in three main stages:
Acquisition is recording the data in the field.
Processing enhances the clarity of the field data to produce a better image of the subsurface, which facilitates the interpreter's job.
Interpretation is the conversion of the seismic image into a geological model.
We start by reformatting the field data and end with the final stack of the CMPs, from which the geology can be seen.
We begin by reformatting the field data from SEG-D or SEG-Y into the internal format. At WG we use the Omega software, which deals with the DIO format, so we convert the field data into DIO before working on it in Omega.
In the field, during recording, instrumental failures can lead to bad recordings in the form of dead traces or bad shots.
We edit traces by reversing their polarity (1st method),
by removing the whole trace using AAA (2nd method),
or by deleting the whole shot when AAA cannot remove the dead trace (3rd method); a sketch of the first two edits follows.
We note the numbers of the dead traces within each shot so we can check them after applying the noise-attenuation methods: if they are removed, so be it; if not, we remove the whole shot.
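A minimal sketch (my illustration, not the deck's code) of manual versions of the first two edits, on a gather stored as a NumPy array:

```python
# Hypothetical trace-editing sketch on a (n_traces, n_samples) gather.
import numpy as np

def reverse_polarity(gather: np.ndarray, trace_idx: int) -> np.ndarray:
    """1st method: flip the polarity of a mis-wired trace."""
    edited = gather.copy()
    edited[trace_idx] *= -1.0
    return edited

def kill_trace(gather: np.ndarray, trace_idx: int) -> np.ndarray:
    """2nd method: zero out a dead trace (done manually here; AAA automates it)."""
    edited = gather.copy()
    edited[trace_idx] = 0.0
    return edited

# Usage on a synthetic 4-trace gather.
gather = np.random.randn(4, 1000)
gather = reverse_polarity(gather, trace_idx=1)
gather = kill_trace(gather, trace_idx=3)
```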
We receive two parts of data from the field:
SPS/UKOOA files and seismic traces.
The SPS/UKOOA files contain the coordinates and elevations of the shot points and lines and of the receiver points and lines.
We merge these files with the seismic traces so that each trace is updated with its location information from the field.
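A hedged sketch of that merge step; the record fields and station numbering below are hypothetical simplifications of real SPS/UKOOA files.

```python
# Hypothetical geometry merge: attach SPS coordinates to a trace header.
sources = {(1001, 5001): {"x": 0.0, "y": 0.0, "elev": 119.8}}
receivers = {
    (1001, 2001): {"x": 500.0, "y": 0.0, "elev": 120.5},
    (1001, 2002): {"x": 525.0, "y": 0.0, "elev": 121.0},
}

def merge_geometry(header, sources, receivers):
    """Look up source/receiver records by (line, station) and update the header."""
    src = sources[(header["src_line"], header["src_station"])]
    rcv = receivers[(header["rcv_line"], header["rcv_station"])]
    header.update(src_x=src["x"], src_y=src["y"], src_elev=src["elev"],
                  rcv_x=rcv["x"], rcv_y=rcv["y"], rcv_elev=rcv["elev"])
    # The source-receiver offset follows once both coordinates are known.
    header["offset"] = ((src["x"] - rcv["x"]) ** 2 + (src["y"] - rcv["y"]) ** 2) ** 0.5
    return header

header = {"src_line": 1001, "src_station": 5001,
          "rcv_line": 1001, "rcv_station": 2001}
print(merge_geometry(header, sources, receivers)["offset"])  # 500.0
```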
We sort the seismic data (traces) into different domains, because each domain allows certain processes that cannot be applied in any other domain.
When the seismic source transmits waves (energy), the transmitted energy is lost with increasing depth due to several factors such as spherical divergence, absorption, and decay.
So we need to balance the energy within the whole shot gather, which leads to the amplitude-recovery process; we keep the original relative amplitudes untouched and simply compensate the amplitudes in the deeper part of the shot.
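A minimal sketch of one common amplitude-recovery choice, a t**power gain; the power value is an illustrative assumption, and real flows may use velocity-dependent spherical-divergence corrections instead.

```python
# Hypothetical t**power gain for spherical-divergence compensation.
import numpy as np

def t_power_gain(gather: np.ndarray, dt: float, power: float = 2.0) -> np.ndarray:
    """Scale each sample by t**power so deeper (later) arrivals are boosted.

    gather: (n_traces, n_samples); dt: sample interval in seconds.
    Note that the t = 0 sample is zeroed by this simple form.
    """
    t = np.arange(gather.shape[1]) * dt
    return gather * t**power  # broadcast the same gain curve onto every trace

gained = t_power_gain(np.random.randn(48, 2001), dt=0.002)
```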
Random noise: energy that exhibits no correlation from one trace to another (it has no specific shape) and cannot be predicted.
Coherent noise: source-generated energy that can be predicted through the shot traces.
To increase the S/N ratio,
we attenuate the noise while preserving the data as much as possible.
According to the frequency content, we design a filter that removes the frequencies of the noise and keeps only the frequencies of the required data.
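A hedged sketch of such a frequency filter, using a zero-phase Butterworth band-pass; the corner frequencies are illustrative assumptions, not values from the deck.

```python
# Hypothetical band-pass frequency filter (zero-phase Butterworth).
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace: np.ndarray, f_low: float, f_high: float, fs: float) -> np.ndarray:
    """Keep [f_low, f_high] Hz and reject frequencies outside that band."""
    b, a = butter(4, [f_low, f_high], btype="band", fs=fs)
    return filtfilt(b, a, trace)  # forward-backward filtering gives zero phase

# Usage: 2 ms sampling (fs = 500 Hz); keep an assumed 8-60 Hz reflection band.
filtered = bandpass(np.random.randn(2001), f_low=8.0, f_high=60.0, fs=500.0)
```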
1- We transform the data from the time-offset (T-X) domain to the frequency-wavenumber (F-K) domain by the Fourier transform:
the Fourier transform is simply a mathematical process that allows us to take a function of time (a seismic trace) and express it as a function of frequency (a spectrum).
2- According to the dip of the data and the dip of the noise, we design a filter that removes the noise while leaving the data untouched
(we determine the dip of the data and design the filter to remove everything else).
This filter removes linear noise within a certain range of dips and velocities; a sketch follows.
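A minimal F-K fan-filter sketch (my illustration): slow, steeply dipping linear noise such as ground roll maps to low apparent velocity |f/k|, so everything below an assumed cutoff velocity is rejected.

```python
# Hypothetical F-K fan filter: reject apparent velocities below v_min.
import numpy as np

def fk_velocity_filter(gather, dt, dx, v_min):
    """gather: (n_traces, n_samples); removes events slower than v_min (m/s)."""
    spec = np.fft.fft2(gather)                  # to the (wavenumber, frequency) domain
    k = np.fft.fftfreq(gather.shape[0], d=dx)   # wavenumber axis (cycles/m)
    f = np.fft.fftfreq(gather.shape[1], d=dt)   # frequency axis (Hz)
    kk, ff = np.meshgrid(k, f, indexing="ij")
    v_app = np.abs(ff) / np.maximum(np.abs(kk), 1e-12)  # apparent velocity f/k
    spec[v_app < v_min] = 0.0                   # zero the slow-dip "fan"
    return np.real(np.fft.ifft2(spec))

# Usage: 25 m trace spacing, 2 ms sampling; reject events slower than 1500 m/s.
filtered = fk_velocity_filter(np.random.randn(48, 1001), dt=0.002, dx=25.0,
                              v_min=1500.0)
```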
Anomalous Amplitude Attenuation (AAA):
we divide the traces into time windows, determine the amplitude range for each window, and any amplitude above the window average is either removed or replaced by the average amplitude.
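A hedged sketch of that windowed logic; the window length and threshold are illustrative assumptions, and anomalous windows are scaled down to the typical level rather than deleted.

```python
# Hypothetical anomalous-amplitude attenuation over fixed time windows.
import numpy as np

def aaa(gather: np.ndarray, win: int = 200, factor: float = 3.0) -> np.ndarray:
    out = gather.copy()
    n_samples = gather.shape[1]
    for start in range(0, n_samples, win):
        sl = slice(start, min(start + win, n_samples))
        rms = np.sqrt(np.mean(gather[:, sl] ** 2, axis=1))  # per-trace window RMS
        typical = np.median(rms)                            # robust reference level
        scale = np.where(rms > factor * typical,
                         typical / np.maximum(rms, 1e-12), 1.0)
        out[:, sl] *= scale[:, None]                        # tame anomalous windows
    return out

# Usage: a simulated noise burst on one trace is pulled back to normal levels.
gather = np.random.randn(48, 1000)
gather[10, 400:600] *= 50.0
cleaned = aaa(gather)
```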
When the source sends energy into the subsurface, the transmitted wave has the form of a wavelet; when it hits the geology (the reflection coefficients, RC), it is reflected and recorded by the receivers in the form of traces, which result from the interaction (convolution) of the wavelet with the RC series.
Deconvolution reverses this process: we aim to remove the wavelet from the trace to obtain a better-resolved image of the RC series (the geology).
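A minimal spiking-deconvolution sketch (my illustration): the inverse filter is estimated from the trace autocorrelation via the Toeplitz normal equations. The filter length and prewhitening are illustrative assumptions, and the method assumes a roughly minimum-phase wavelet.

```python
# Hypothetical spiking deconvolution via Wiener-Levinson normal equations.
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(trace, n_filter=50, prewhite=0.01):
    """Estimate and apply an inverse filter that compresses the wavelet to a spike."""
    ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:]  # lags 0..N-1
    r = ac[:n_filter].copy()
    r[0] *= 1.0 + prewhite                # prewhitening stabilises the inversion
    rhs = np.zeros(n_filter)
    rhs[0] = 1.0                          # desired output: zero-lag spike (scale arbitrary)
    inv_filter = solve_toeplitz(r, rhs)   # solve the symmetric Toeplitz system
    return np.convolve(trace, inv_filter)[:len(trace)]

# Usage: build a trace as wavelet * sparse reflectivity, then deconvolve it.
rng = np.random.default_rng(0)
reflectivity = rng.standard_normal(500) * (rng.random(500) > 0.95)
wavelet = np.exp(-np.linspace(0, 4, 40)) * np.sin(np.linspace(0, 12, 40))
trace = np.convolve(reflectivity, wavelet)[:500]
spiky = spiking_decon(trace)              # closer to the reflectivity series
```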
We transform the data from the T-X domain to the tau-p domain using the Radon transform, then we separate the real data from the multiples and remove the multiples while preserving the data.
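A hedged sketch of the transform step only: a linear tau-p slant stack of a T-X gather, with an illustrative slowness range. In practice the multiples are then isolated in the tau-p panel and subtracted after an inverse transform (that part is omitted here), and a parabolic rather than linear Radon transform is common for demultiple.

```python
# Hypothetical linear tau-p (slant-stack) transform of a T-X gather.
import numpy as np

def slant_stack(gather, offsets, dt, slownesses):
    """Sum every trace along lines t = tau + p * x for each slowness p (s/m)."""
    n_samples = gather.shape[1]
    t = np.arange(n_samples) * dt
    taup = np.zeros((len(slownesses), n_samples))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            # Sample this trace at the shifted times tau + p * x.
            taup[ip] += np.interp(t + p * x, t, gather[ix], left=0.0, right=0.0)
    return taup

# Usage: 48 traces at 25 m spacing, slownesses up to +-1/1500 s/m.
offsets = np.arange(48) * 25.0
slownesses = np.linspace(-1.0 / 1500.0, 1.0 / 1500.0, 101)
taup_panel = slant_stack(np.random.randn(48, 1001), offsets, dt=0.002,
                         slownesses=slownesses)
```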