The document discusses optimizing ORM framework performance configurations using a genetic algorithm. It finds that the default configurations are often significantly slower than optimal configurations. A genetic algorithm can guide the optimization of configurations across objectives like CPU usage, memory usage, and execution time. Experiments show the genetic algorithm finds configurations close to optimal in 5-20 minutes, much faster than trial-and-error. Developers should care about ORM performance configuration due to potential performance degradations from defaults.
Optimizing the Performance-Related Configurations of Object-Relational Mapping Frameworks Using a Multi-Objective Genetic Algorithm
1. Optimizing the Performance-Related Configurations of ORM Frameworks Using a Multi-Objective Genetic Algorithm
Ravjot Singh, Cor-Paul Bezemer, Weiyi Shang, Ahmed E. Hassan
3. 27% of customer issues are due to misconfiguration
4. 20% of misconfigurations cause performance degradations
5. A popular online store estimates a $1.6 billion loss for a 1-second slowdown
6. Databases are at the core of many large systems
10. Code without ORM is tedious

public class Person {
    // ...
    public String getName() throws SQLException {
        String sql = "SELECT name FROM … WHERE …";
        ResultSet rs = stmt.executeQuery(sql); // stmt: a previously created JDBC Statement
        if (rs.next())
            return rs.getString("name");
        return null;
    }
}
11. Code with ORM is clean

@Entity
public class Person {
    @Id Integer getId() { ... }
    public String getName() {
        return this.name;
    }
}
16. The impact of one ORM configuration option
hibernate.max_fetch_depth = {0|1|2|3|...}
17. The impact of one ORM configuration option
hibernate.max_fetch_depth = {0|1|2|3|...}
The optimal configuration depends on what you need!
18. Should we care about ORM performance configuration?
We analyze 11 boolean configuration options, hence 2^11 = 2048 configurations.
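The count on slide 18 is a quick bit of arithmetic; this one-liner is just a worked check of it, not code from the paper:

```java
public class ConfigCount {
    public static void main(String[] args) {
        int booleanOptions = 11;
        int configurations = 1 << booleanOptions; // 2^11
        System.out.println(configurations);       // 2048
    }
}
```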
19. We compare the 'default' with the optimal configuration
[Diagram: run the system tests under the default configuration to obtain the default execution time.]
20. We compare the 'default' with the optimal configuration
[Diagram adds: run the same system tests under the optimal configuration to obtain the optimal execution time.]
21. We compare the 'default' with the optimal configuration
[Diagram adds: compare the default execution time with the optimal execution time.]
22. Yes, we should care about ORM performance configuration!
89% and 96% of the test cases are significantly slower using the default!
23. How can we guide ORM performance configuration?
25. Optimize ORM configuration by trial-and-error
[Diagram: randomly select a configuration A and evaluate it on some workload W.]
26. Optimize ORM configuration by trial-and-error
[Diagram adds: compare the execution time for configuration A with the execution time for the current best configuration.]
27. Optimize ORM configuration by trial-and-error
[Diagram adds: use the best configuration.]
28. Optimize ORM configuration by trial-and-error
[Diagram: the complete trial-and-error loop.]
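The trial-and-error baseline above can be sketched as a random search over the 11 boolean options. This is an illustrative sketch only; the evaluate() cost model below is a made-up stand-in for actually running workload W under a configuration and measuring its execution time:

```java
import java.util.Random;

public class RandomSearch {
    static final int OPTIONS = 11;
    static final Random RNG = new Random(42);

    // Made-up cost model (an assumption): in reality this step runs
    // workload W under the configuration and measures execution time.
    static double evaluate(boolean[] config) {
        double time = 100.0;
        for (int i = 0; i < config.length; i++)
            if (config[i]) time += (i % 3 == 0) ? -1.0 : 0.5;
        return time;
    }

    public static void main(String[] args) {
        boolean[] best = new boolean[OPTIONS];      // start from the 'default'
        double bestTime = evaluate(best);
        for (int trial = 0; trial < 200; trial++) {
            boolean[] cand = new boolean[OPTIONS];  // randomly select configuration A
            for (int i = 0; i < OPTIONS; i++) cand[i] = RNG.nextBoolean();
            double t = evaluate(cand);              // evaluate A on workload W
            if (t < bestTime) {                     // compare with current best
                bestTime = t;
                best = cand;
            }
        }
        // Use the best configuration: it is never worse than the default.
        System.out.println(bestTime <= evaluate(new boolean[OPTIONS]));
    }
}
```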
30. Genetic algorithm concept
Optimize a population based on one or more objectives.
We start from the 'default' configuration as supplied by the developer.
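As a minimal single-objective sketch of this concept, the loop below seeds the population with the developer-supplied default and evolves it via elitism, crossover, and mutation. The fitness function is a toy assumption, not one of the paper's objectives (CPU usage, memory usage, execution time), and the operators are generic rather than the paper's exact algorithm:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class GaSketch {
    static final int OPTIONS = 11, POP = 20, GENS = 30;
    static final Random RNG = new Random(1);

    // Toy fitness (an assumption for illustration): counts "good" bits.
    // The paper instead measures CPU, memory, and execution time.
    static int fitness(boolean[] c) {
        int score = 0;
        for (int i = 0; i < OPTIONS; i++) if (c[i] == (i % 2 == 0)) score++;
        return score;
    }

    public static void main(String[] args) {
        List<boolean[]> pop = new ArrayList<>();
        pop.add(new boolean[OPTIONS]);             // seed: the 'default' configuration
        while (pop.size() < POP) {                 // plus some random neighbours
            boolean[] c = new boolean[OPTIONS];
            for (int i = 0; i < OPTIONS; i++) c[i] = RNG.nextInt(4) == 0;
            pop.add(c);
        }
        for (int g = 0; g < GENS; g++) {
            pop.sort((a, b) -> fitness(b) - fitness(a));                  // best first
            List<boolean[]> next = new ArrayList<>(pop.subList(0, POP / 2)); // elitism
            while (next.size() < POP) {                                   // crossover + mutation
                boolean[] p1 = next.get(RNG.nextInt(POP / 2));
                boolean[] p2 = next.get(RNG.nextInt(POP / 2));
                boolean[] child = new boolean[OPTIONS];
                for (int i = 0; i < OPTIONS; i++) {
                    child[i] = (RNG.nextBoolean() ? p1 : p2)[i];          // uniform crossover
                    if (RNG.nextInt(10) == 0) child[i] = !child[i];       // bit-flip mutation
                }
                next.add(child);
            }
            pop = next;
        }
        pop.sort((a, b) -> fitness(b) - fitness(a));
        // Elitism guarantees the best found is at least as fit as the default (fitness 5).
        System.out.println(fitness(pop.get(0)) >= 5);
    }
}
```

Because the default configuration is in the initial population and elitism carries the best individuals forward, the best fitness can never drop below the default's.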
38. Evaluation of our approach
1. Closeness of the configurations found by our approach to the optimal configuration
2. Speed with which we can find sub-optimal configurations
40. Closeness of configurations to the optimal
We rank all existing configurations based on dominance.
Configuration A dominates configuration B if:
1. A is not worse than B for any objective
2. A is better than B for at least one objective
41. Ranking of configurations

Configuration | CPU usage | Memory usage | Execution time
Default       | 0         | 0            | 0
A             | +50%      | +50%         | 0
B             | -80%      | -20%         | 0
C             | 0         | -80%         | -20%

42. Ranking of configurations
(table repeated from slide 41)
Rank 1: B and C
Rank 2: Default
Rank 3: A
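The ranking on slides 40-42 can be reproduced with a small Pareto "front peeling" routine. The numbers are the ones from the table (lower is better); the code itself is an illustrative sketch, not the paper's implementation:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ParetoRank {
    // A dominates B if A is no worse in every objective and strictly
    // better in at least one (lower values are better here).
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] > b[i]) return false;       // a is worse in some objective
            if (a[i] < b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }

    public static void main(String[] args) {
        // Objectives: CPU usage, memory usage, execution time (relative to default)
        Map<String, double[]> cfg = new LinkedHashMap<>();
        cfg.put("Default", new double[]{0, 0, 0});
        cfg.put("A", new double[]{50, 50, 0});
        cfg.put("B", new double[]{-80, -20, 0});
        cfg.put("C", new double[]{0, -80, -20});

        // Repeatedly peel off the non-dominated front to assign ranks.
        List<String> remaining = new ArrayList<>(cfg.keySet());
        int rank = 1;
        while (!remaining.isEmpty()) {
            List<String> front = new ArrayList<>();
            for (String s : remaining) {
                boolean dominated = false;
                for (String t : remaining)
                    if (!t.equals(s) && dominates(cfg.get(t), cfg.get(s))) dominated = true;
                if (!dominated) front.add(s);
            }
            System.out.println("Rank " + rank++ + ": " + front);
            remaining.removeAll(front);
        }
    }
}
```

Running it recovers the slide's ranking: B and C in rank 1, the default in rank 2, and A in rank 3.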
43. The genetic algorithm finds configurations close to the best rank
[Figure: configurations found by the genetic algorithm, plotted from BEST to WORST rank.]
44. Speed with which configurations are found
Depends on:
- Workload
- Application
- Data
45. The genetic algorithm finds configurations fast (~5-20 minutes)
46. Yes, we should care about ORM performance configuration!
89% and 96% of the test cases are significantly slower using the default!