By leveraging self-organizing maps (SOMs) and seismic inversion products, it is possible to:
1) More rapidly localize interpretation in a dataset while utilizing specialized geological knowledge.
2) Potentially make substantial gains in both efficiency and interpretation quality.
3) Increase the engagement of non-specialists.
The method classifies inversion results with an unbiased algorithm to find anomalous areas for specialists to focus on, helping maximize their time. However, specialists should still understand the physics behind any anomalies found.
1. Efficiency gains in inversion based interpretation
through computer driven classification
Dustin T. Dewett
2. Outline
Part One: Background
• SOM: A Brief Explanation
• Inversion: Relevant Background (very brief)
Part Two: SOM and Inversion
• Concept of SOM Classification of Inversion
• Real Data Example (brief)
Part Three: Questions and Wrap-up
20170531
Efficiency Gains
2
3. SOM: Briefly Explained
Suppose we have a table of data…
• Let the columns be different attributes
• Let the rows be what we want to classify
• We can classify each object by the attributes that are ascribed to them.
• The key parameters are the number of neurons, the maximum number of epochs, and the learning rate.
Attributes:
       1  2  3  4  5
OBJ1   5  3  3  7  9
OBJ2   6  3  6  8  6
OBJ3   9  2  6  9  4
OBJ4   1  4  9  6  4
OBJ5   1  2  4  6  2
OBJ6   2  8  4  5  6
OBJ7   3  5  5  3  1
OBJ8   4  8  1  2  2
OBJ9   5  6  1  2  6
4. SOM: Briefly Explained
• The system (SOM) will map the input attributes onto an output space:
• Generate an initial state (e.g. pseudo random, true random, or predefined)
• Then iterate the following:
  • Select training data at random
  • Compute the winning neuron
  • Update all neurons
• Stop when the input is classified or the maximum number of epochs has been reached
• In general, the neurons are interconnected in a defined topology.
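The loop described above can be sketched as a minimal 1-D SOM in Python. Everything here is illustrative rather than from the talk: the data are the nine objects from the attribute table, and the parameter values, linear topology, and Gaussian neighborhood function are common defaults.

```python
import numpy as np

# The nine objects and five attributes from the table above.
data = np.array([
    [5, 3, 3, 7, 9], [6, 3, 6, 8, 6], [9, 2, 6, 9, 4],
    [1, 4, 9, 6, 4], [1, 2, 4, 6, 2], [2, 8, 4, 5, 6],
    [3, 5, 5, 3, 1], [4, 8, 1, 2, 2], [5, 6, 1, 2, 6],
], dtype=float)

def train_som(data, n_neurons=4, max_epochs=200, learning_rate=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Initial state: pseudo-random neuron positions, 1-D (linear) topology.
    neurons = rng.uniform(data.min(), data.max(), (n_neurons, data.shape[1]))
    for epoch in range(max_epochs):
        frac = 1.0 - epoch / max_epochs            # decay schedule
        lr = learning_rate * frac
        sigma = max(0.5, (n_neurons / 2.0) * frac)
        x = data[rng.integers(len(data))]          # select training data at random
        # Compute the winning neuron (closest in attribute space).
        winner = int(np.argmin(np.linalg.norm(neurons - x, axis=1)))
        # Update all neurons, weighted by grid distance to the winner.
        grid_dist = np.abs(np.arange(n_neurons) - winner)
        h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))
        neurons += lr * h[:, None] * (x - neurons)
    return neurons

def classify(data, neurons):
    # Each object is assigned to its best-matching neuron (its class).
    return np.argmin(np.linalg.norm(data[:, None, :] - neurons[None, :, :],
                                    axis=2), axis=1)

neurons = train_som(data)
labels = classify(data, neurons)
```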
6. Relevant Inversion Background
[Workflow diagram: seismic data and well data feed a seismic inversion, yielding geophysical properties (AI, SI, ρ); a rock physics model, calibrated with well data, converts these to geological properties (fluid, lithology, etc.)]
7. Relative vs Absolute Inversion
Relative
• Poor fidelity over long time intervals (a function of the frequency content of the data)
• Scaling of the output values is only proportional to the real values
• Suffers more from tuning effects
• Easy to compute
• No initial model dependence
Absolute
• High fidelity over long time intervals
• Correct (or more correct) output scaling
• Suffers far less from tuning effects
• Difficult to compute (both in computational time and knowledge)
• Dependent on initial model
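The scaling point can be made concrete with a toy calculation: integrating reflectivity recovers impedance only up to an unknown starting value, which is exactly what an absolute inversion's initial model supplies. This is a numerical illustration, not an inversion algorithm, and the impedance values are made up.

```python
import numpy as np

# Made-up acoustic impedance log (the "real values").
z_true = np.array([2000.0, 2100.0, 2500.0, 2450.0, 3000.0])

# Reflectivity from impedance contrasts.
r = (z_true[1:] - z_true[:-1]) / (z_true[1:] + z_true[:-1])

# Relative recovery: integrating reflectivity without knowing z_true[0]
# gives values that are only *proportional* to the real impedances,
# since ln Z(i) ≈ ln Z(0) + 2 * sum(r) and ln Z(0) is unknown.
z_rel = np.exp(2.0 * np.cumsum(np.concatenate([[0.0], r])))

# Absolute recovery: an initial model supplies the missing anchor z_true[0].
z_abs = z_true[0] * z_rel
```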
8. A Note on Coloured Inversion
Coloured Inversion
• Tends to outperform other relative inversion methods (e.g. Runsum or Recursive Inversion)
• Attempts spectrum flattening through wavelet estimation by smoothing the seismic spectrum
• Matches the output seismic impedance spectrum to the well impedance spectrum
• Quick and cheap
• Simple and robust
• No background model
• No wavelet derivation
• Well data are only used for spectral matching
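A minimal sketch of the spectral-matching idea, assuming a power-law target impedance spectrum |Z(f)| ∝ (f/f_ref)^alpha fitted to well logs (alpha = -1 and f_ref = 30 Hz are invented placeholders). A real coloured inversion would smooth the spectrum over many traces before deriving a single operator; here one trace's raw spectrum is shaped directly, so the match is exact by construction.

```python
import numpy as np

def coloured_inversion_operator(trace, dt, alpha=-1.0, f_ref=30.0):
    """Build a frequency-domain shaping operator that maps the trace's
    amplitude spectrum onto a well-log style impedance spectrum
    |Z(f)| ~ (f / f_ref)**alpha, with a -90 degree phase rotation.
    alpha and f_ref are illustrative stand-ins for values fitted to wells."""
    freqs = np.fft.rfftfreq(len(trace), dt)
    spec = np.abs(np.fft.rfft(trace))
    spec[spec == 0] = spec.max() * 1e-9         # guard against division by zero
    target = np.zeros_like(freqs)
    target[1:] = (freqs[1:] / f_ref) ** alpha   # desired impedance spectrum
    return (target / spec) * np.exp(-1j * np.pi / 2)

def apply_operator(trace, op):
    # Convolution by multiplication in the frequency domain.
    return np.fft.irfft(np.fft.rfft(trace) * op, n=len(trace))

rng = np.random.default_rng(1)
trace = rng.standard_normal(251)                # toy seismic trace, dt = 4 ms
op = coloured_inversion_operator(trace, dt=0.004)
relative_impedance = apply_operator(trace, op)
```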
9. Outline
Part One: Background
• SOM: A Brief Explanation
• Inversion: Relevant Background (very brief)
Part Two: SOM and Inversion
• Concept of SOM Classification of Inversion
• Real Data Example (brief)
Part Three: Questions and Wrap-up
10. Goal: Empower a non-specialist to leverage inversion results in an effective way.
11. Relative Seismic Inversion Results*
[Figure: relative seismic inversion results, AI and GI sections; *results courtesy of R. Meza]
• Given multi-volume results from seismic inversion, typically specialists classify the results based on their prior knowledge and experience combined with an understanding of the rock physics.
• However, specialists may be limited in time and more so in supply.
• Therefore, if a more unbiased and repeatable approach can be used to classify the inversion response, efficiency can be gained through specialist engagement at key times with specific areas of interest.
13. Workflow
• Begin with inversion products,
• Classify the results with a computer based classification algorithm,
• Scan the results for anomalous classifications (neurons),
• Focus the specialist’s engagement on anomalous areas.
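The four workflow steps might look roughly like this on toy data. The AI/GI volumes are synthetic, a plain k-means clusterer stands in for the SOM purely to keep the sketch short, and the scan step treats sparsely populated classes as anomaly candidates, which is one plausible reading of "anomalous classifications" rather than the talk's definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: begin with inversion products -- here, synthetic AI and GI
# "volumes" (samples x traces) with one planted anomalous zone in GI.
ai = rng.normal(0.0, 1.0, (100, 50))
gi = rng.normal(0.0, 1.0, (100, 50))
gi[40:45, 20:30] -= 4.0

# Stack the products so each sample is a row of attributes (AI, GI).
features = np.column_stack([ai.ravel(), gi.ravel()])

# Step 2: classify with a computer-based algorithm (plain k-means here).
k = 8
centroids = features[rng.choice(len(features), size=k, replace=False)]
for _ in range(20):
    dists = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = features[labels == j].mean(axis=0)

# Step 3: scan the classes -- sparsely populated classes are candidates
# for anomalous responses worth a specialist's attention.
counts = np.bincount(labels, minlength=k)
occupied = np.where(counts > 0, counts, counts.max() + 1)
anomalous_class = int(np.argmin(occupied))

# Step 4: map the flagged class back to volume coordinates.
mask = (labels == anomalous_class).reshape(ai.shape)
```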
14. Begin with Inversion Products
[Figure: AI and GI inversion products; *results courtesy of R. Meza]
16. But what are these classes really?
[AI-GI crossplot showing class positions (modified from Whitcombe and Fletcher, 2001)]
17. But what are these classes really?
[AI-GI crossplot (modified from Whitcombe and Fletcher, 2001)]
It’s not quite this easy.
18. But what are these classes really?
[Crossplot annotated at its extremes: maximum attribute 1 and minimum attribute 2; minimum attribute 1 and maximum attribute 2; maximum attribute 1 and maximum attribute 2; minimum attribute 1 and minimum attribute 2; and data located about the middle point on both attributes]
19. Most anomalous class (of 16), determined after a scan of the data one class at a time. Using more classes can allow for finer distinctions between different data points.
23. Summary
By leveraging a self-organizing map and inversion products together, it is possible
• to more rapidly localize interpretation in a dataset while leveraging a higher degree of geologic understanding typically found in prospect interpreters (who commonly lack in-depth QI knowledge),
• to make potentially substantial gains in both efficiency and interpretation quality,
• to increase the engagement of non-specialists.
But always remember to:
• engage a specialist to fully understand the physics behind the SOM classified anomalies.
24. Acknowledgements
R. Meza for providing a suitable relative inversion data set.
M. Florez for feedback and critique.
F. Hilterman for encouraging publication.
BHP Billiton for encouraging the work and providing needed resources for completion.
25. References
Whitcombe, D., and J. G. Fletcher, 2001, The AIGI crossplot as an aid to AVO analysis and calibration: SEG Technical Program Expanded Abstracts 2001, 219-222.
Connolly, P. A., 1999, Elastic impedance: The Leading Edge, 18, 438-452, doi: 10.1190/1.1438307.
Kohonen, T., 1989, Self-organization and associative memory, 3rd edition: Springer, New York.
Good afternoon, my name is Dustin Dewett, and I am both a geophysicist with BHP Billiton and a Ph.D. student at the University of Oklahoma.
Today, I will be specifically talking about using relative inversion products and self-organizing maps together. However, the general principle applies to both model based and relative inversion as well as other methods of computer based classification.
Briefly, I will begin with an explanation of what a SOM is and how it works.
Then, I will briefly touch on some seismic inversion background that is specific to the products that I used in this work.
In part two, I will go over the concept of using a SOM together with inversion products and how it can increase efficiency. The efficiency that I am specifically referring to is related to idle time when waiting on a specialist or increased quality of interpretation through better engagement of a general interpreter.
As I go through this section there will be a real data example where this technique was applied.
If we are given an arbitrary set of data, where each data object is a function of several data attributes, then we can apply a classification scheme to those data.
As the number of attributes becomes larger, it becomes significantly more efficient to leverage computer-based schemes to automate this process.
With respect to a SOM, there are several key parameters that affect the result: the number of neurons (or how many classes you will end up with), the maximum number of epochs (or how many iterations the SOM will run), and the learning rate (or the distance that a neuron can move for a given epoch).
In a simple picture, we begin with input data and a set of starting parameters (discussed in the last slide). We also need a starting topology and a way to seed the neurons (to determine how the neurons are placed).
In this example, I have a linear topology with pseudo random seed values. I then randomly choose a data point from the input set and move the closest neuron toward that point. Notice that the neurons are linked together.
After a number of iterations, a solution will converge and individual neuron movement will be small.
So how neurons are placed, where they are placed, and the key parameters discussed previously are important to the final result.
Often SOMs are compared to PCA (principal component analysis). Where PCA is linear, however, SOMs are very non-linear.
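The linear-vs-non-linear distinction can be seen in a small experiment: PCA's best one-dimensional summary of points on a parabola is a straight axis, whereas a SOM's chain of neurons can bend along the curve. The sketch below (data and code are illustrative, not from the talk) shows only the PCA half.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 points on a parabola: a 1-D but *curved* structure embedded in 2-D.
t = rng.uniform(-1.0, 1.0, 200)
pts = np.column_stack([t, t ** 2])

# PCA finds the best straight line through the data (a linear mapping).
centered = pts - pts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
axis = vt[0]                    # first principal component
projection = centered @ axis    # 1-D coordinates along that straight axis

# A SOM with a linear topology would instead place a chain of neurons
# that bends to follow the parabola, because each neuron adapts locally
# rather than contributing to one global straight-line fit.
```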
Here are two movies where the input data are arranged in letters placed horizontally.
Here is the basic idea behind inversion. There are several points where a specialist would need to be engaged.