Self-organizing maps (SOMs) can be used to classify seismic attributes in an unsupervised manner and reveal geological patterns that improve seismic interpretation. SOMs reduce a large set of seismic attributes into a smaller set of clusters that relate to geologic features of interest. The classified seismic data can then be analyzed using computer vision algorithms like convolutional neural networks to automatically identify depositional sequences, seismic facies, play types, leads, and prospects. This facilitates a more robust and timely seismic interpretation that can help identify hydrocarbon traps and reduce exploration risk and costs.
SOM-VIS
1. Self-Train Seismic Data to Reveal Your Traps
Smaller, harder-to-get targets imply that more targets are needed, which requires replacing traditional, time- and cost-consuming play, lead and prospect mapping methods.
Lower the risk and make cost savings - big time.
2. CHALLENGES OF SEISMIC DATA INTERPRETATION
3D seismic interpretation is replacing 2D-domain interpretation to a greater degree than before.
More data types create data and attribute overload that cannot be screened manually with a high degree of confidence and reliability.
Multiple surveys over the same areas require governance and comparison/calibration, which is time-consuming and full of potential traps.
4D-domain seismic interpretation introduces a full suite of new parameters to take into consideration.
The number of attributes, and the lack of clarity in their inter-dependencies and their importance for describing the geology or reservoirs, has become overwhelming.
3. Importance of Seismic Attributes
A seismic attribute is any measurable property of seismic data. These attributes are, in turn, the input to self-organizing-map (SOM) training. The aim is to distill numerous seismic attributes into volumes that are easily evaluated for their geologic significance and that improve seismic interpretation. Commonly used categories of seismic attributes include instantaneous, geometric, amplitude-accentuating, amplitude-variation-with-offset, spectral decomposition, and inversion attributes.
Principal component analysis (PCA), a linear quantitative technique, has proven to be a useful approach for understanding which seismic attributes, or combinations of attributes, have interpretive significance. PCA reduces a large set of seismic attributes to the components that capture the main variations in the data, which often relate to geologic features of interest. PCA, as a tool in an interpretation workflow, can help determine which seismic attributes are meaningful.
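As a rough illustration of this idea, the sketch below uses scikit-learn's PCA on a synthetic attribute matrix to rank attribute loadings; the attribute names and the random data are placeholders, not part of the workflow described here.

```python
# Minimal sketch: using PCA to see which seismic attributes dominate the
# principal components. The attribute names and the random data are
# placeholders for a real (n_samples x n_attributes) attribute matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

attribute_names = ["envelope", "inst_frequency", "coherence",
                   "curvature", "spectral_band_30hz"]  # hypothetical attribute list

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, len(attribute_names)))    # stand-in for real attributes

# Attributes have different units, so standardize before PCA.
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=3)
pca.fit(X_std)

print("Explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
for i, comp in enumerate(pca.components_):
    # The largest absolute loadings indicate which attributes drive each component.
    ranked = sorted(zip(attribute_names, comp), key=lambda t: abs(t[1]), reverse=True)
    print(f"PC{i + 1}:", [(name, round(w, 2)) for name, w in ranked])
```

On real data, the attributes with consistently large loadings on the leading components would be the candidates to carry forward into SOM training.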
6. Unsupervised Classification of Attributes
The classification of patterns into groups without supervision is formally called clustering.
Depending on the application area, these patterns are called data lists, observations or vectors.
For exploration geophysicists, these patterns are usually associated with seismic attributes, seismic waveforms or seismic facies.
The main objective here is to show how one of the most popular clustering algorithms, the Kohonen Self-Organizing Map (KSOM), can be applied to enhance seismic interpretation analysis in combination with one- and two-dimensional color maps.
7. Kohonen Self-Organizing Map (KSOM)
KSOM (Kohonen, 2001) clustering is one of the most commonly used tools for unsupervised seismic facies analysis, with the KSOM providing ordered clusters that can be mapped to a gradational color bar (Coléou et al., 2003).
KSOM is closely related to vector quantization methods (Haykin, 1999).
We assume that the input variables, i.e., the seismic attributes, can be represented by vectors in the space ℝ^N: a_j = [a_j1, a_j2, ..., a_jN], j = 1, 2, ..., J, where N is the number of seismic attributes and J is the number of seismic traces when KSOM is applied to surface attributes, or the number of voxels (Matos et al., 2005) when KSOM is applied to volumetric attributes.
The objective of the algorithm is to organize the dataset of input seismic attributes into a geometric structure called the KSOM.
8. Iteration to Create the Clustering
If we assume that the self-organizing map has P units, defined as prototype vectors, then there exist P N-dimensional prototype vectors m_i = [m_i1, ..., m_iN], i = 1, 2, ..., P, each connected to its neighbors by a low-dimensional grid. Usually this grid has dimension one or two and defines the KSOM dimensionality. A 2D KSOM is most commonly represented by a hexagonal or rectangular structural grid. After initializing the KSOM prototype vectors to reasonably span the data space, the next training step is to choose a representative subset of the J input vectors. Each training vector is associated with its nearest prototype vector. After each training iteration, the mean and standard deviation of the input vectors associated with each prototype vector are accumulated, after which each prototype vector is updated using a function of the distance between it and its neighbors (Kohonen, 2001). This iterative process stops either when the KSOM converges or when the training process reaches a predetermined number of iterations.
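A minimal NumPy sketch of this kind of iterative training is shown below. It uses a batch update with a shrinking Gaussian neighborhood, which is one common simplification of the scheme described above; the grid size, data and iteration count are arbitrary placeholders.

```python
# Minimal NumPy sketch of a 2D Kohonen SOM trained on seismic attribute vectors.
# Batch-style update with a shrinking Gaussian neighborhood; a simplified variant
# of the iterative scheme described in the text, not a production implementation.
import numpy as np

def train_som(data, rows=8, cols=8, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n_samples, n_attr = data.shape

    # Prototype vectors m_i, initialized to random input samples (spanning the data).
    protos = data[rng.choice(n_samples, rows * cols, replace=False)].copy()

    # Grid coordinates of each prototype on the low-dimensional (2D) map.
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

    for t in range(n_iter):
        sigma = max(0.5, (rows / 2) * (1 - t / n_iter))       # shrinking neighborhood radius

        # Best-matching unit (nearest prototype) for every input vector.
        d = np.linalg.norm(data[:, None, :] - protos[None, :, :], axis=2)
        bmu = d.argmin(axis=1)

        # Batch update: each prototype moves toward the neighborhood-weighted mean.
        for i in range(rows * cols):
            grid_dist = np.linalg.norm(grid - grid[i], axis=1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))   # neighborhood weights
            w = h[bmu]                                         # weight of each sample via its BMU
            if w.sum() > 0:
                protos[i] = (w[:, None] * data).sum(axis=0) / w.sum()
    return protos, grid

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    attrs = rng.normal(size=(5000, 6))       # stand-in for J traces x N attributes
    prototypes, grid = train_som(attrs)
    print(prototypes.shape)                  # (64, 6): P prototype vectors of dimension N
```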
9. Classification According to the KSOM
The KSOM places the prototype vectors on a regular low-dimensional grid in an ordered fashion (Kohonen, 2001), and after training the prototype vectors form a good representation of the input dataset of seismic attributes. Next, we label each input seismic attribute vector with the index of the closest KSOM prototype vector, i.e., the KSOM index with the highest cross-correlation to the input data vector. This labeling process is called classification (Kohonen, 2001). KSOM can be considered an unsupervised classification algorithm because no prior information is used to generate the prototype vectors. KSOM can also easily be used in a supervised manner (Kohonen, 2001).
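The labeling step itself can be as simple as a nearest-prototype lookup. The sketch below assumes Euclidean distance as the matching rule (the text above mentions highest cross-correlation as an alternative) and uses random placeholder prototypes standing in for the trained ones from the sketch above.

```python
# Minimal sketch of the KSOM classification (labeling) step: each input attribute
# vector gets the index of its closest prototype. Euclidean distance is used here;
# highest cross-correlation, mentioned in the text, is an alternative matching rule.
import numpy as np

rng = np.random.default_rng(2)
attrs = rng.normal(size=(5000, 6))           # stand-in for input attribute vectors
prototypes = rng.normal(size=(64, 6))        # stand-in for trained SOM prototypes

d = np.linalg.norm(attrs[:, None, :] - prototypes[None, :, :], axis=2)
labels = d.argmin(axis=1)                    # KSOM index assigned to every trace/voxel

print(labels[:10], labels.min(), labels.max())
```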
10. Training the Data
The number of prototype vectors in the map determines both its effectiveness and its generalization capacity. During training, the KSOM forms an elastic net that adapts to the "cloud" formed by the input seismic attribute data.
Data that are close to each other in the input space will also be close to each other in the output map. Since the KSOM can be interpreted as a reduced version of the N-dimensional input data, governed by a lower-dimensional grid that attempts to preserve the original topological structure, and since seismic data measure changes in geology,
the KSOM approximates the topological relations of the underlying geology.
11. Cluster Formation of Attributes
Although the prototype vectors represent the input data very well, they have the same dimension as the input data, which makes visualization difficult. For this reason, we exploit the topological relations among the prototype vectors as a visualization tool to display the different data characteristics and structure. One way to visualize cluster formation among the KSOM prototype vectors is to compute the distances between the vectors, thereby generating a U-matrix (Ultsch, 1993). Another is to map continuous 1D, 2D or 3D color bars to the SOM topology to represent the location of each prototype vector.
KSOM can be applied to volumetric or surface attributes.
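One possible way to compute such a U-matrix on a rectangular grid is sketched below; the prototypes are random placeholders standing in for a trained map.

```python
# Minimal sketch of a U-matrix for a rectangular SOM grid: each cell holds the mean
# distance between a prototype and its immediate grid neighbors. High values mark
# cluster boundaries, low values mark cluster interiors. Prototypes are placeholders.
import numpy as np

rows, cols, n_attr = 8, 8, 6
rng = np.random.default_rng(3)
protos = rng.normal(size=(rows, cols, n_attr))   # trained prototypes reshaped onto the grid

u_matrix = np.zeros((rows, cols))
for r in range(rows):
    for c in range(cols):
        dists = []
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                dists.append(np.linalg.norm(protos[r, c] - protos[rr, cc]))
        u_matrix[r, c] = np.mean(dists)

print(np.round(u_matrix, 2))
```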
12. Color Maps of the KSOM
These attributes are the input to self-organizing-map (SOM) training. The SOM, a form of unsupervised neural network (UNN), has proven able to take many of these seismic attributes and produce meaningful and easily interpretable results.
SOM analysis reveals the natural clustering and patterns in the data and has been beneficial in defining stratigraphy, seismic facies, direct hydrocarbon indicator features, and aspects of shale plays, such as fault/fracture trends and sweet spots.
Through visualization and the application of 2D color maps, the SOM routinely identifies meaningful geologic patterns. Recent work using SOM and PCA has revealed geologic features that were not previously identified or easily interpreted from the seismic data.
The ultimate goal of this multi-attribute analysis is to enable the geoscientist to produce a more accurate interpretation and reduce exploration and development risk.
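A simple way to build a 2D color map tied to the SOM topology is to derive colors directly from each prototype's grid position, as in the sketch below; the grid size and the classification labels are placeholders.

```python
# Minimal sketch of a 2D color map tied to the SOM topology: the grid row drives one
# color channel and the grid column another, so neighboring prototypes get similar
# colors. Classified samples then inherit the color of their assigned prototype.
import numpy as np

rows, cols = 8, 8
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

# RGB per prototype: red from row, green from column, constant blue.
colors = np.stack([grid[:, 0] / (rows - 1),
                   grid[:, 1] / (cols - 1),
                   np.full(rows * cols, 0.4)], axis=1)

labels = np.random.default_rng(4).integers(0, rows * cols, size=5000)  # stand-in classification
sample_colors = colors[labels]             # per-trace/voxel RGB for display
print(sample_colors.shape)                 # (5000, 3)
```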
13. KSOM of Seismic Attributes
Instantaneous Q and bandwidth have similar patterns.
The 12x12 and 8x8 grids have comparable shapes.
The 12x12 grid shows slightly clearer clusters.
• Similar weight patterns but opposite trends should reveal different trends in N-dimensional space.
• Keep both attributes for classification.
• Use a high number of classes to give greater discrimination.
• Clustering of maxima/minima indicates the potential for strong discrimination.
• Similar weight patterns favor a neural-network solution with one attribute chosen on other criteria.
[Figure: KSOM weight diagrams for instantaneous Q and bandwidth on 12x12 and 8x8 grids.]
14. KSOM of Seismic Attributes
Seismic attributes: AVO slope, seismic time and velocity.
AVO slope exhibits a high degree of clustering, whereas the time and velocity attributes both merely mimic the depth trend.
15. Color Map of the KSOM Result
Seismic attributes: AVO slope, seismic time and velocity.
AVO slope exhibits a high degree of clustering, whereas the time and velocity attributes both merely mimic the depth trend.
[Figure: linear color bar based on values obtained in the weight diagrams, and the 2D color-bar map in the linear scheme.]
17. Simulated Annealing
Simulated annealing (SA) based classification systems can be used in seismic mapping. SA has been shown to overcome the local-minimum problem that is typical of many unsupervised classification approaches.
SA-based classification systems can help overcome the local-minimum problem in one such approach, K-means, and thus improve classification performance.
There are two SA-based classification systems: the single SA-based (S-SA) system, developed from the standard SA algorithm, and the integrated SA-based (I-SA) system, developed by combining the standard SA algorithm and K-means into a two-level classification system. Experimental results have demonstrated that the SA-based systems significantly improve classification accuracy over the K-means algorithm when appropriate parameters are chosen. The I-SA system has been shown to produce a satisfactory classification more efficiently than the S-SA system.
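To make the idea concrete, the sketch below shows one way simulated annealing could refine a K-means result using a Metropolis acceptance rule and a cooling schedule. It is a toy illustration under assumed parameters, not the S-SA or I-SA systems themselves.

```python
# Minimal sketch of simulated annealing used to refine a K-means clustering, in the
# spirit of the integrated (I-SA) idea: perturb centroids, and accept worse solutions
# with a temperature-controlled probability so the search can escape local minima.
import numpy as np
from sklearn.cluster import KMeans

def cost(data, centroids):
    # Total within-cluster distance (the quantity we try to minimize).
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()

rng = np.random.default_rng(5)
data = rng.normal(size=(2000, 4))                       # stand-in attribute vectors

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(data)
centroids = km.cluster_centers_.copy()
current_cost = cost(data, centroids)

temperature = 0.01 * current_cost                       # initial temperature scaled to the cost
for step in range(200):
    candidate = centroids + rng.normal(scale=0.05, size=centroids.shape)
    c_new = cost(data, candidate)
    # Metropolis criterion: always accept improvements, sometimes accept setbacks.
    if c_new < current_cost or rng.random() < np.exp((current_cost - c_new) / max(temperature, 1e-9)):
        centroids, current_cost = candidate, c_new
    temperature *= 0.98                                  # cooling schedule

labels = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2).argmin(axis=1)
print(current_cost, np.bincount(labels))
```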
18. WHAT ABOUT THE TRAPS?
So far, we have talked about how to reveal the most important attributes of the seismic data, regardless of data type, and how to use these in an unsupervised manner to classify the seismic data.
This should give us, in a more robust and timely manner, a seismic dataset that helps us better identify traps in the search for hydrocarbons.
Since the newly classified dataset should better reveal geological, and potentially fluid and rock, properties, the interpreter now faces the task of identifying traps, or should we use the word geometries.
19. Reveal the Traps with the Help of Convolutional Neural Networks
With the classified dataset now better revealing geological, and potentially fluid and rock, properties, the interpreter faces the task of identifying these traps, or geometries.
20. TRAIN YOUR DATA TO SEE THE TRAPS
The current excitement about artificial intelligence (AI) stems, in great part, from groundbreaking advances involving what are known as convolutional neural networks (CNN).
This machine learning technique promises dramatic improvements in areas such as computer vision, speech recognition, and natural language processing.
You have probably heard of it by its more layperson-friendly name: "deep learning."
21. MASSIVE AMOUNTS OF DATA IN NEED OF TRAINING
You have terabytes upon terabytes of seismic data in raw, amplitude or derivative formats. Most of the time it lies there idle, waiting for the geoscientist to log in and put it to use.
Why not let the data work while it is not being used by the geoscientist, outside working hours, while the poor geoscientist is at home getting a well-deserved sleep?
In the meantime, the data can do its exercise and training and be ready when the geoscientist logs in, so he or she can begin work with a more intelligent dataset than last time.
A dataset that can now tell the geoscientist much more, and reveal much more, makes it possible to make the next hydrocarbon discovery with a greater chance of success, at a much lower cost and with less time and effort.
22. RECOGNISE SEISMIC FACIES WITH IMAGE RECOGNITION PLATFORMS
Computer vision technology is rising rapidly, and the number of companies developing image recognition platforms is growing quickly.
Until recently, computer vision technology was used primarily for detecting and recognizing faces in photos. While facial recognition remains a popular use of this technology, there has been a rapid rise in the use of computer vision for automatic photo tagging and classification.
This increase is largely due to recent advances in artificial intelligence (AI), specifically the use of convolutional neural networks (CNNs) to improve computer vision methods.
So far, this technology has not gained much ground within the oil and gas industry.
23. PATTERN RECOGNITION
Stratigraphic interpretation of seismic data is a time-consuming and highly subjective methodology, where the result depends heavily on the operator's skill, training and, above all, experience in recognizing depositional environments and their associated geometrical attitude and occurrence.
Combine this with variations in the underlying data quality and in seismic data type, and there are many ways this could go wrong.
The task at hand is to identify geometric patterns in the data and generate image captions/descriptions.
24. CONVOLUTIONAL NEURAL NETWORKS AND SEISMIC FACIES
Why not use computer vision algorithms to analyze digitized images of seismic data (original or attribute versions, it does not matter)? The algorithms could be trained to detect and understand visual similarities in seismic facies patterns and automatically classify them based on style, occurrence, etc.
Utilizing convolutional neural networks (CNNs), which are able to learn complex visual concepts from massive amounts of data, could save time and effort, and not only that: it could create a more objective analysis of the data.
The use of machine learning and image processing algorithms to analyze, recognize and understand visual content could prove to be a groundbreaking way to analyze large amounts of data, both with supervised neural networks (SNN) and with unsupervised neural networks (UNN) used alongside the CNN.
The computer is trained to find patterns within the data using deep learning-based computer vision technology to analyze, recognize and understand the content of an image.
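As a hedged sketch of what such a classifier could look like, the PyTorch example below trains a small CNN on randomly generated stand-ins for interpreter-labeled 64x64 seismic patches; the architecture, class count and patch size are illustrative assumptions, not the method described in the slides.

```python
# Minimal PyTorch sketch of a CNN that classifies 64x64 seismic image patches into
# a few facies classes. The random tensors stand in for interpreter-labeled patches
# cut from original or attribute volumes; architecture and sizes are illustrative.
import torch
import torch.nn as nn

class FaciesCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FaciesCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(32, 1, 64, 64)             # stand-in for labeled seismic patches
labels = torch.randint(0, 4, (32,))              # stand-in facies labels

for epoch in range(5):                           # toy training loop on one batch
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```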
25. COMPUTER VISION TECHNOLOGY COMES TO THE AID OF SEEING SEISMIC FACIES
Although the ideas behind neural networks date back to the 1940s, it is only within the last few years that the use of CNNs has really taken off.
CNNs are being used to significantly improve computer vision, speech recognition, natural language processing and other related technologies.
Companies are doing amazing research in the field of artificial intelligence and are democratizing breakthroughs in AI.
With so many advances in deep learning-based computer vision technology happening just within the last few years, it will be exciting to see how this field can be used within seismic stratigraphy applications in the not-too-distant future.
26. WHAT IS SEISMIC STRATIGRAPHY AND WHY IS IT SO IMPORTANT?
Seismic stratigraphy is basically a geologic approach to the stratigraphic interpretation of seismic data.
Seismic reflections allow the direct application of geologic concepts based on physical stratigraphy.
Primary seismic reflections are generated by physical surfaces in the rocks, consisting mainly of stratal surfaces and unconformities with velocity-density contrasts.
It is therefore possible to identify primary seismic reflections that parallel stratal surfaces and unconformities.
A seismic section is a record of chronostratigraphic (time-stratigraphic) depositional and structural patterns, not a record of the time-transgressive lithostratigraphy (rock stratigraphy).
27. SEISMIC STRATIGRAPHIC INTERPRETATION IS A MASSIVE PATTERN RECOGNITION EFFORT
It is possible to make the following types of stratigraphic interpretation from the geometry of seismic reflection correlation patterns:
• geologic time correlations
• definition of genetic depositional units
• thickness and depositional environment of genetic units
• paleo bathymetry
• burial history
• relief and topography on unconformities
• paleogeography and geologic history
28. SEISMIC STRATIGRAPHIC INTERPRETATION PROCEDURE
To accomplish these geologic objectives, you follow a three-step interpretational procedure:
• seismic sequence analysis
• seismic facies analysis
• analysis of relative changes of sea level
Seismic sequence analysis is based on the identification of stratigraphic units composed of a relatively conformable succession of genetically related strata, termed depositional sequences.
The upper and lower boundaries of depositional sequences are unconformities or their correlative conformities.
29. CONVOLUTIONAL NEURAL NETWORKS (CNN) TO IMPROVE THE IDENTIFICATION OF DEPOSITIONAL SEQUENCES
Depositional sequence boundaries are recognized on seismic data by identifying reflections caused by lateral terminations of strata.
30. TRAINING THE LEARNING COMPUTER THROUGH ARTIFICIAL INTELLIGENCE
Depositional sequence boundaries are recognized on seismic data by identifying reflections caused by lateral terminations of strata, termed (see the sketch after this list):
• onlap
• downlap
• toplap
• truncation
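One way this could be operationalized is to sweep a trained patch classifier across a seismic section and flag candidate termination patterns. The sketch below assumes such a classifier exists and substitutes a random placeholder for it; the class list follows the slide above.

```python
# Minimal sketch of sweeping a patch classifier across a seismic section to flag
# reflection-termination patterns. The classifier here is a random placeholder
# standing in for a trained CNN like the one sketched earlier in this deck.
import numpy as np

TERMINATION_CLASSES = ["background", "onlap", "downlap", "toplap", "truncation"]

def classify_patch(patch):
    # Placeholder: a trained model would return a class index for this patch.
    seed = int(abs(patch.sum()) * 1e3) % (2 ** 32)
    return np.random.default_rng(seed).integers(len(TERMINATION_CLASSES))

section = np.random.default_rng(6).normal(size=(512, 1024))   # stand-in seismic section
patch, stride = 64, 32
hits = []
for i in range(0, section.shape[0] - patch + 1, stride):
    for j in range(0, section.shape[1] - patch + 1, stride):
        label = classify_patch(section[i:i + patch, j:j + patch])
        if TERMINATION_CLASSES[label] != "background":
            hits.append((i, j, TERMINATION_CLASSES[label]))    # candidate boundary evidence

print(len(hits), hits[:3])
```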
34. AUTOMATIC IDENTIFICATION OF PLAY TYPES, LEADS AND PROSPECTS
Train your data on well-known play types and trap types in the region and in the relevant part of the stratigraphy. In addition, keep a library of known types from other areas; you never know, you might find one in your data too.
35. TAG YOUR PLAY TYPES, LEADS AND PROSPECTS
[Slide graphic: a tagging dialog with Yes/No confirmation buttons and "Type a Name" fields.]
Just like you do in
• Facebook or
• iPhoto
You give input to the unsupervised training of your data. It will automatically identify similar features and/or give you a choice of places it finds similar, and you choose to tell it whether it is right or wrong.