This was presented at the International Geosciences Symposium organized by Christ College in Kerala, India. In this presentation, I shared one of my works on the prediction of synthetic sonic logs using machine learning in the Volve field, North Sea.
We have two sources for forest variables: direct measurements, which are expensive and sparse in space, and correlated LiDAR data with complete coverage. The Bonanza Creek Experimental Forest (BCEF) is a Long-Term Ecological Research (LTER) site consisting of vegetation and landforms typical of interior Alaska. Three forest variables are of interest: above-ground biomass (AGB), tree density (TD), and basal area (BA). The brightness, greenness, and wetness tasseled-cap indices can be used as covariates to explain the forest variables. In the undergraduate workshop project, students can work up from the simplest regression models to more sophisticated spatial models and compare the inferences drawn from the different approaches.
Group members: Richard Groenwald, Mehmut Hatip, Katrina Lewis, Jennifer Soter, Astride Tchkaoua, Sylvester Wieb
Modern enterprise data—tracking key performance indicators like conversions or click-throughs—exhibits a pathologically high dimensionality, which requires re-thinking data representation to make analysis tractable.
In robotics, GraphSLAM is a simultaneous localization and mapping (SLAM) algorithm that uses sparse information matrices produced by generating a factor graph of observation interdependencies (two observations are related if they contain data about the same landmark).
Alpine Data Labs presents a deep dive into our implementation of multinomial logistic regression with Apache Spark. Machine Learning Engineer DB Tsai takes us through the technical implementation details step by step. First, he explains how the state of the art in machine learning on Hadoop is not fulfilling the promise of Big Data. Next, he explains how Spark is a perfect match for machine learning through its in-memory caching capability, demonstrating a 100x performance improvement. Third, he takes us through each aspect of multinomial logistic regression and how it is developed with the Spark APIs. Fourth, he demonstrates an extension of MLOR and its training parameters. Fifth, he benchmarks MLOR with 11M rows, 123 features, and 11% non-zero elements on a 5-node Hadoop cluster. Finally, he shows Alpine's unique visual environment with Spark and verifies the performance with the job tracker. In conclusion, Alpine supports state-of-the-art Cloudera and Pivotal Hadoop clusters and performs at a level that far exceeds its next nearest competitor.
Multinomial Logistic Regression with Apache Spark
DB Tsai
Logistic regression can be used not only for modeling binary outcomes but also, with some extension, multinomial outcomes. In this talk, DB will walk through the basic idea of binary logistic regression step by step and then extend it to the multinomial case. He will show how easy it is with Spark to parallelize this iterative algorithm by utilizing the in-memory RDD cache to scale horizontally (in the number of training samples). However, there is a mathematical limitation on scaling vertically (in the number of training features), while many recent applications in document classification and computational linguistics are of exactly this type. He will talk about how to address this problem with the L-BFGS optimizer instead of the Newton optimizer.
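As a rough illustration of the same combination (not the Spark MLlib code described in the talk), scikit-learn also pairs multinomial logistic regression with an L-BFGS solver; this toy sketch is purely an assumption for exposition:

```python
# Minimal sketch: multinomial logistic regression fit with L-BFGS on a
# 3-class toy dataset. Illustrative only; not the Spark implementation.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # 3 classes -> multinomial outcome

# solver="lbfgs" uses the quasi-Newton L-BFGS optimizer, which avoids
# forming the full Hessian that a Newton step would require.
clf = LogisticRegression(solver="lbfgs", max_iter=200)
clf.fit(X, y)
print(clf.predict(X[:5]), clf.score(X, y))
```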
Bio:
DB Tsai is a machine learning engineer working at Alpine Data Labs. He has recently been working with the Spark MLlib team to add support for the L-BFGS optimizer and multinomial logistic regression upstream. He also led Apache Spark development at Alpine Data Labs. Before joining Alpine Data Labs, he worked on large-scale optimization of optical quantum circuits at Stanford as a PhD student.
Finding Meaning in Points, Areas and Surfaces: Spatial Analysis in R
Revolution Analytics
Everything happens somewhere, and spatial analysis attempts to use location as an explanatory variable. Such analysis is made complex by the many ways we habitually record spatial location, the complexity of spatial data structures, and the wide variety of possible domain-driven questions we might ask. One option is to develop and use software for specific types of spatial data; another is to use a purpose-built geographical information system (GIS); but determined work by R enthusiasts has resulted in a multiplicity of packages in the R environment that can also be used.
Dexterous In-hand Manipulation by OpenAI
Anand Joshi
OpenAI has used reinforcement learning to train a humanoid robotic hand to rotate a cube into any desired orientation. This is discussed in arXiv:1808.00177 (2019) and in the blog post <openai.com/blog/learning-dexterity/>. These slides present results from the paper along with a few important concepts in reinforcement learning that I learned from many other sources.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...
University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
ESR Spectroscopy in Liquid Food and Beverages
PRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires preservation of the food. There are various methods of treating food to preserve it, and irradiation is one of them. It is among the most common and most harmless methods of food preservation, as it does not alter the essential micronutrients of the food. Although irradiated food does not harm human health, quality assessment of the food is still required to provide consumers with the necessary information. ESR spectroscopy is a highly sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food. The antioxidant capability of liquid food and beverages is mainly assessed by the spin-trapping technique.
Phenomics-Assisted Breeding in Crop Improvement
IshaGoswami9
The global population is increasing and will reach about 9 billion by 2050, and due to climate change it is becoming difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and a growing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement through breeding is the best way to increase crop productivity. With the rapid progress of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics governed by multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information at all growth stages have become as important as genotyping. Thus, phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growth, spanning the cell, tissue, organ, individual-plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) through the lens of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and the solution of frictionless reproducibility, calling on the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk, Journées Nationales du GDR GPL 2024
Seminar on U.V. Spectroscopy by SAMIR PANDA
SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...
University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
ISI 2024: Application Form (Extended), Exam Date (Out), Eligibility
SciAstra
The Indian Statistical Institute (ISI) has extended its application deadline for 2024 admissions to April 2. Known for its excellence in statistics and related fields, ISI offers a range of programs from Bachelor's to Junior Research Fellowships. The admission test is scheduled for May 12, 2024. Eligibility varies by program, generally requiring a background in Mathematics and English for undergraduate courses and specific degrees for postgraduate and research positions. Application fees are ₹1500 for male general category applicants and ₹1000 for females. Applications are open to Indian and OCI candidates.
Richard's adventures in two entangled wonderlands
Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making. They monitor common gases, weather parameters, and particulates.
Machine Learning Applications in Subsurface Analysis: Case Study in the North Sea
1. Machine Learning Application in Subsurface Analysis: Case Study in the North Sea
Yohanes Nuwara
December 2, 2020
International Geosciences Symposium
2. Outline
● What is Machine Learning?
● Machine learning in geoscience
● Machine learning workflow
● Case study in the Volve field, North Sea
● Exploratory data analysis & pre-processing
● Prediction
● Conclusion
3. What is Machine Learning?
● An algorithm-assisted process …
● … that learns from data (as input),
● … trains on the data
● … to fit a mathematical model,
● … and finally outputs a prediction
Source: Federated Learning (Google AI)
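As a concrete version of that loop, here is a minimal sketch with scikit-learn on synthetic data; the library and the toy example are illustrative assumptions, not from the slides:

```python
# Minimal "learn from data -> fit a model -> output a prediction" loop,
# sketched on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))       # input data
y = 3.0 * X[:, 0] + rng.normal(0, 1, 100)   # noisy target, y ~ 3x

model = LinearRegression().fit(X, y)        # train: fit a mathematical model
print(model.predict([[5.0]]))               # output a prediction (~15)
```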
4. Solving Specific Problems in Geoscience with ML
● Missing geophysical log traces – supervised learning prediction
● Generating a facies model based on geophysical logs from nearby wells – supervised learning classification
● Clustering different facies – unsupervised learning classification
● Fault identification – supervised learning (Convolutional Neural Networks / CNN)
● Salt body identification (or other anomalies, e.g. gas chimneys) – supervised learning (CNN)
● Rock typing – unsupervised learning (Self-Organizing Maps / SOM)
6. Machine learning workflow (diagram): exploratory data analysis (feature selection, feature engineering, data normalization, removing outliers) → feature and target data → train-test split → 1st training and prediction → metric of each model → true vs. predicted validation → hyperparameter tuning → best hyperparameters → final prediction → final predicted result
8. Data overview
● The Volve field dataset is a massive volume of data released to the public by Equinor in 2018
● There are data from 20+ wells, but only 5 are used for now
● The 5 wells are: 15/9-F-11A, 15/9-F-11B, 15/9-F-1A, 15/9-F-1B, and 15/9-F-1C (we'll just call them wells 1, 2, 3, 4, and 5)
● Wells 1, 3, and 4 have the DT log; wells 2 and 5 don't
● Our objective now: use machine learning to predict the DT log in wells 2 and 5
12. Exploratory Data Analysis 1 (Pairplot)
● A pairplot shows how each variable is distributed on its own (univariate) and against the others (multivariate)
● The diagonal shows the histogram or "probability density function" (univariate)
● The off-diagonal panels show the crossplot of one log against another (multivariate)
13. (Annotated pairplot) The diagonal is the probability density function; the off-diagonal panels are the crossplots between logs.
(A) NPHI shows a left-skewed distribution
(B) RT shows a "spike" distribution
(C) Outliers can be seen
(D) Positive correlation between DT and NPHI
(E) Positive correlation between RHOB and PEF
(F) Negative correlation between RHOB and DT
(G) Little correlation between DT and CALI
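A minimal sketch of this pairplot step with seaborn, assuming the five wells' logs are already merged into a pandas DataFrame `df` whose column names (NPHI, RHOB, RT, PEF, CALI, DT) are taken from the annotations above:

```python
# Pairplot EDA: univariate densities on the diagonal, crossplots off it.
import seaborn as sns
import matplotlib.pyplot as plt

logs = ["NPHI", "RHOB", "RT", "PEF", "CALI", "DT"]
sns.pairplot(df[logs], diag_kind="kde")  # kde: probability density on the diagonal
plt.show()
```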
14. Exploratory Data Analysis 2 (Heatmap)
● A correlation heatmap visualizes the correlation between every pair of variables (logs)
● Calculate the correlation (2 kinds: Pearson or Spearman)
● The heatmap tells us which features should be phased out – a variable with LOW correlation to the target shouldn't be used as a feature for prediction
Pearson’s
correlation
Spearman’s
correlation
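A hedged sketch of both heatmaps, again assuming the logs live in a DataFrame `df`:

```python
# Correlation heatmaps for the two correlation kinds named on the slide.
import seaborn as sns
import matplotlib.pyplot as plt

logs = ["NPHI", "RHOB", "RT", "PEF", "CALI", "DT"]
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
for ax, method in zip(axes, ["pearson", "spearman"]):
    corr = df[logs].corr(method=method)  # pairwise correlation matrix
    sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1, ax=ax)
    ax.set_title(f"{method.capitalize()} correlation")
plt.show()
```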
16. Dealing with missing data
● Missing data (NaN; non-numerical values) will be a problem for ML
● 2 ways to handle missing data:
○ Drop all observations (rows) that have NaN – if we have a small dataset, this is not the suggested route
○ Fill each NaN with the mean value of the data – also called imputation
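Both options, sketched with pandas and scikit-learn on an assumed numeric DataFrame `df`:

```python
# Two ways to handle missing values (NaN) before ML.
import pandas as pd
from sklearn.impute import SimpleImputer

# Option 1: drop every row containing a NaN (wasteful on small datasets).
df_dropped = df.dropna()

# Option 2: impute each NaN with its column's mean value.
imputer = SimpleImputer(strategy="mean")
df_imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
```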
17. Missing facies data for 233 wells in the North Sea (25 of the wells are shown below)
GEOLINK data
18. Feature engineering
● Feature engineering is a way to transform a variable into a new feature using some mathematical function (e.g. log, exponent, etc.) – you can also create entirely new features
● In petrophysics we know that RT is better visualized on a semilog plot
● Therefore, we can transform RT into log(RT)
● Then inspect the new feature using a pairplot
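The RT → log(RT) feature from this slide, sketched on the assumed DataFrame `df` (RT must be strictly positive for the logarithm):

```python
# Semilog-style transform of resistivity as a new engineered feature.
import numpy as np

df["log_RT"] = np.log10(df["RT"])
# A pairplot of df[["log_RT", "DT"]] would then show the de-spiked distribution.
```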
19. Look at RT now: it is better distributed (no spike anymore)
20. Data transformation
● Also known as feature scaling
● The objective is to make the distribution more Gaussian – less skewed
● There are two basic approaches: standardization and normalization
● Standardization 🡪 transforming the data using its standard deviation and average
● Normalization 🡪 transforming the data using its min and max values
● Other methods include power transforms (the Box-Cox or Yeo-Johnson method) and scaling to unit L1 or L2 norm
● We then compare to find which one works best
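A sketch of these scaling options with scikit-learn, on an assumed feature matrix `X` (rows = depth samples, columns = logs):

```python
# Feature scaling options named on the slide.
from sklearn.preprocessing import MinMaxScaler, PowerTransformer, StandardScaler

X_std = StandardScaler().fit_transform(X)    # (x - mean) / std
X_norm = MinMaxScaler().fit_transform(X)     # (x - min) / (max - min)
X_yj = PowerTransformer(method="yeo-johnson").fit_transform(X)  # push toward Gaussian
# Box-Cox is available via PowerTransformer(method="box-cox"),
# but it requires strictly positive inputs.
```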
23. Outlier removal
● We saw many outliers in the data
● In fact, machine learning works better when outliers are minimized
● The most basic way to remove outliers is to keep only the data within a few standard deviations of the mean – anything outside is treated as an outlier
● There are lots of other methods: Isolation Forest, Minimum Covariance Determinant (the elliptic envelope method), Local Outlier Factor, and One-Class SVM
● Again, we compare to find which one works best
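Two of these strategies, sketched on the assumed DataFrame `df` / feature matrix `X` (the 2-sigma cutoff and 5% contamination rate are illustrative choices):

```python
# (a) Std-based clipping and (b) Isolation Forest outlier removal.
import numpy as np
from sklearn.ensemble import IsolationForest

# (a) Keep only rows within 2 standard deviations of each column's mean.
z = (df - df.mean()) / df.std()
df_clipped = df[(np.abs(z) < 2).all(axis=1)]

# (b) Isolation Forest labels each row inlier (+1) or outlier (-1).
mask = IsolationForest(contamination=0.05, random_state=42).fit_predict(X) == 1
X_clean = X[mask]
```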
27. First attempt
● The objective of our 1st attempt is:
○ Compare which regression model is the best for predicting the DT log
○ Validate our prediction by comparing the true vs. predicted results
● Then, we fit the regressors to the training data and predict on the test data – we get the predicted DT
● We compare the true vs. predicted DT of the wells
● We also print metrics (RMSE, R²) to evaluate the performance of each regressor
(Diagram: Train = Well 1 + Well 3 + Well 4; Test = Well 1, Well 3, Well 4)
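A sketch of this first attempt; the `train`/`test` DataFrames, the feature list, and the particular regressors are assumptions about how the slide's data is arranged, not the deck's exact code:

```python
# Fit several regressors on the training wells and score on the test data.
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

features = ["NPHI", "RHOB", "log_RT", "PEF", "CALI"]  # assumed predictors
X_train, y_train = train[features], train["DT"]
X_test, y_test = test[features], test["DT"]

for reg in (LinearRegression(), RandomForestRegressor(), GradientBoostingRegressor()):
    y_pred = reg.fit(X_train, y_train).predict(X_test)
    rmse = mean_squared_error(y_test, y_pred) ** 0.5
    print(type(reg).__name__, f"RMSE={rmse:.3f}", f"R2={r2_score(y_test, y_pred):.3f}")
```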
30. Which one is the best?
● We can see that Gradient Boosting performs the best
● It has the highest R² = 0.95 and the lowest RMSE = 0.22
● This is understandable because, per our earlier definition, GB is an ensemble algorithm that boosts weak regressors, typically CARTs
● Can we improve performance? – yes, with hyperparameter tuning
31. Hyperparameter Tuning
● An optimization procedure to search for the best hyperparameters of the regressor, so as to optimize the prediction
● What are hyperparameters? – variables that configure the regressor, independent of the data
● Examples of hyperparameters – the K value in KNN, the learning rate in a neural network
● We use grid search CV
(Panels: without tuned hyperparameters (default) vs. with tuned hyperparameters)
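A sketch of grid search cross-validation for the winning regressor, reusing `X_train`/`y_train` from the earlier sketch; the grid values are illustrative assumptions:

```python
# Exhaustive grid search with 5-fold cross-validation.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [100, 300, 500],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 5],
}
search = GridSearchCV(GradientBoostingRegressor(), param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X_train, y_train)  # tries every combination in the grid
print(search.best_params_, -search.best_score_)
```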
33. Conclusion
● Three wells in the Volve field (F11A, F1A, and F1B) are used for training to predict the DT log in two wells (F11B and F1C) that don't have a P-sonic log
● Data normalization and outlier removal are critical steps in machine learning
● The best-performing regressor is Gradient Boosting
● Hyperparameter tuning is useful for finding the best hyperparameters for the regressor (although it takes more time)
34. Thank you
Want to discuss?
E-mail : ign.nuwara97@gmail.com
LinkedIn : https://www.linkedin.com/in/yohanesnuwara/