This document summarizes a research paper that uses game theory to analyze software development practices and processes. It describes how game theory can model conflict and cooperation between decision makers. It then discusses several software development dilemmas, such as a freelancers' dilemma over cooperation and a team's dilemma over quick fixes versus proper long-term solutions. The researchers built simulation models of these dilemmas and analyzed how different development practices, like code reviews, affect the optimal strategies and outcomes. Their goal is to apply game theory to improve software processes and practices.
An introduction to the Open FAIR standard, a framework for analyzing and expressing risk in financial terms. This presentation was originally given at the Louisville Metro InfoSec Conference on 9/19/17.
Evolution as a Tool for Understanding and Designing Collaborative Systems (Wilfried Elmenreich)
Keynote talk by Wilfried Elmenreich at PRO-VE 2011:
Self-organizing phenomena can be found in many social systems, either forcing collaboration or destroying it. Typically, these properties have not been designed by a central ruler but evolved over time. While it is straightforward to find examples in many social systems, finding the appropriate interaction rules to design such systems from scratch is difficult due to the unpredictable or counterintuitive nature of such emergent and complex systems. Therefore, we propose evolutionary models to examine and extrapolate the effect of particular collaboration rules. Evolution, in this context, does not replace the work of analyzing complex social systems, but complements existing techniques of simulation, modeling, and game theory in order to lead to a new understanding of interrelations in collaborative systems.
In real-world applications, most optimization problems involve more than one objective to be optimized. The objectives in engineering problems are often conflicting: maximize performance, minimize cost, maximize reliability, and so on. In such cases, no single extreme solution satisfies all objective functions, and the optimal solution for one objective will not necessarily be the best solution for the other objective(s). Different solutions therefore produce trade-offs between objectives, and a set of solutions is required to represent the optimal solutions of all objectives. Multi-objective formulations are realistic models for many complex engineering optimization problems, and customized genetic algorithms have been demonstrated to be particularly effective at finding excellent solutions to them. A reasonable approach to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. This paper presents an overview of various multi-objective genetic algorithms developed to handle different problems with multiple objectives.
The potential role of AI in the minimisation and mitigation of project delay (Pieter Rautenbach)
Artificial intelligence (AI) can have wide-reaching application within the construction industry; however, this set of technologies is currently under-exploited. This paper considers the role that AI can play in optimising the efficiency of project execution, and how this can potentially reduce project duration and minimise and mitigate delay on projects.
Prediction of Euro 50 Using Back Propagation Neural Network (BPNN) and Geneti... (AI Publications)
Modeling time series is often associated with forecasting certain characteristics in the next period. One forecasting method developed in recent years uses artificial neural networks. Using a neural network for time-series forecasting can work well, but choosing the right network architecture and training method is difficult. One option is a genetic algorithm: a stochastic search algorithm based on the mechanisms of natural selection and genetic variation that aims to find a solution to a problem. This algorithm can be used as a training method for a back-propagation neural network. Applying a genetic algorithm with a neural network to time-series forecasting aims to obtain optimal weights. From training and testing on Euro 50 share price index data, the testing RMSE was 27.8744 and the training RMSE 39.2852. The resulting weights reached an optimum level at generation 1000, with a best fitness of 0.027771 and an average fitness of 0.0027847. The model is good enough to give reasonably accurate predictions, as shown by outputs close to the targets.
For three decades, many mathematical programming methods have been developed to solve optimization problems. However, until now there has not been a single, totally efficient and robust method that covers all optimization problems arising in the different engineering fields. Most engineering design problems involve the choice of design-variable values that best describe the behaviour of a system. At the same time, those results should meet the requirements and specifications imposed by the norms for that system. This last condition leads to predicting what the input parameter values should be so that the design results comply with the norms and also perform well, which describes the inverse problem. Generally, in design problems the variables are discrete from the mathematical point of view. However, most mathematical optimization applications are focused on and developed for continuous variables. Presently, there are many research articles about optimization methods; the typical ones are based on calculus, numerical methods, and random methods.
The calculus-based methods have been studied intensely and are subdivided into two main classes: 1) direct search methods, which find a local maximum by moving over the function along the local gradient directions, and 2) indirect methods, which usually find local extrema by solving the set of non-linear equations resulting from setting the gradient of the objective function to zero, i.e., by a multidimensional generalization of the elementary-calculus notion of a function's extreme points: given a smooth, unconstrained function, the possible maxima are restricted to those points whose slope is zero in all directions. The real world has many discontinuities and noisy spaces, so it is not surprising that methods depending upon the restrictive requirements of continuity and differentiability are unsuitable for all but a very limited problem domain. A number of schemes have been applied in many forms and sizes. The idea is quite direct: within a finite search space, or a discretized infinite search space, the algorithm evaluates the objective function at each point of the space, one at a time. The simplicity of this kind of algorithm is very attractive when the number of possibilities is small. Nevertheless, such schemes are often inefficient, since they fail the requirement of robustness in large or highly dimensional spaces, where finding the optimal values becomes quite a hard task. Given the shortcomings of the calculus-based and numerical techniques, random methods have increased in popularity.
This paper presents a set of methods that use a genetic algorithm for automatic test-data generation in software testing. Over the years, researchers have proposed several methods for generating test data, each with different drawbacks. In this paper, we present various Genetic Algorithm (GA) based test methods, with different parameters, to automate structure-oriented test-data generation on the basis of internal program structure. The factors discovered are used in evaluating the fitness function of the genetic algorithm for selecting the best possible test method. These methods take test populations as input and then evaluate the test cases for the program. This integration helps improve the overall performance of the genetic algorithm in search-space exploration and exploitation, with a better convergence rate.
The importance of model fairness and interpretability in AI systems (Francesca Lazzeri, PhD)
Machine learning model fairness and interpretability are critical for data scientists, researchers and developers to explain their models and understand the value and accuracy of their findings. Interpretability is also important to debug machine learning models and make informed decisions about how to improve them.
In this session, Francesca will go over a few methods and tools that enable you to "unpack" machine learning models, gain insights into how and why they produce specific results, assess your AI system's fairness, and mitigate any observed fairness issues.
Using open-source fairness and interpretability packages, attendees will learn how to:
- Explain model prediction by generating feature importance values for the entire model and/or individual data points.
- Achieve model interpretability on real-world datasets at scale, during training and inference.
- Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
- Leverage additional interactive visualizations to assess which groups of users might be negatively impacted by a model and compare multiple models in terms of their fairness and performance.
QA Financial Forum London 2021 - Automation in Software Testing. Humans and C... (Iosif Itkin)
Speaker: Iosif Itkin, co-CEO & co-founder, Exactpro Systems
9th November 2021
Hilton Canary Wharf
Exactpro is an independent software testing business focused on mission-critical financial market infrastructures, primarily exchanges and clearing houses. In his presentation, Iosif will give a brief overview of research on the concept of model-based testing and the principal challenges of its application while testing complex distributed systems. He will also outline the broader context of interaction between humans and complex computer models.
Load Distribution Composite Design Pattern for Genetic Algorithm-Based Autono... (ijsc)
Current autonomic computing systems are ad hoc solutions that are designed and implemented from scratch. When designing software, in most cases two or more patterns must be composed to solve a bigger problem. A composite design pattern shows a synergy that makes the composition more than just the sum of its parts, which leads to ready-made software architectures. As far as we know, there are no studies on the composition of design patterns for the autonomic computing domain. In this paper we propose a pattern-oriented software architecture for self-optimization in autonomic computing systems, using design-pattern composition and multi-objective evolutionary algorithms, which software designers and/or programmers can exploit to drive their work. The main objective of the system is to reduce the load on the server by distributing the population to clients. We used the Case Based Reasoning, Database Access, and Master Slave design patterns. We evaluate the effectiveness of our architecture with and without design-pattern composition. The use of composite design patterns in the architecture and quantitative measurements are presented. A simple UML class diagram is used to describe the architecture.
Cooperating with machines - Jacob W. Crandall et al. (rossskuddershamus)
ARTICLE
Cooperating with machines
Jacob W. Crandall1, Mayada Oudah2, Tennom3, Fatimah Ishowo-Oloko2, Sherief Abdallah4,5, Jean-François Bonnefon6, Manuel Cebrian7, Azim Shariff8, Michael A. Goodrich1 & Iyad Rahwan7,9
Since Alan Turing envisioned artificial intelligence, technical progress has often been measured by the ability to defeat humans in zero-sum encounters (e.g., Chess, Poker, or Go). Less attention has been given to scenarios in which human–machine cooperation is beneficial but non-trivial, such as scenarios in which human and machine preferences are neither fully aligned nor fully in conflict. Cooperation does not require sheer computational power, but instead is facilitated by intuition, cultural norms, emotions, signals, and pre-evolved dispositions. Here, we develop an algorithm that combines a state-of-the-art reinforcement-learning algorithm with mechanisms for signaling. We show that this algorithm can cooperate with people and other algorithms at levels that rival human cooperation in a variety of two-player repeated stochastic games. These results indicate that general human–machine cooperation is achievable using a non-trivial, but ultimately simple, set of algorithmic mechanisms.
DOI: 10.1038/s41467-017-02597-8
1 Computer Science Department, Brigham Young University, 3361 TMCB, Provo, UT 84602, USA. 2 Khalifa University of Science and Technology, Masdar Institute, P.O. Box 54224, Abu Dhabi, United Arab Emirates. 3 UVA Digital Himalaya Project, University of Virginia, Charlottesville, VA 22904, USA. 4 British University in Dubai, Dubai, United Arab Emirates. 5 School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, UK. 6 Toulouse School of Economics (TSM-Research), Centre National de la Recherche Scientifique, University of Toulouse Capitole, Toulouse 31015, France. 7 The Media Lab, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. 8 Department of Psychology and Social Behavior, University of California, Irvine, CA 92697, USA. 9 Institute for Data, Systems and Society, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA. Correspondence and requests for materials should be addressed to J.W.C. (email: [email protected]) or to I.R. (email: [email protected]).
Nature Communications (2018) 9:233 | DOI: 10.1038/s41467-017-02597-8 | www.nature.com/naturecommunications
Introduction to monte-carlo analysis for software development (Troy Magennis)
Forecasting and managing software development project risks and uncertainty. Monte Carlo analysis is the tool of choice for managing risk in many fields where risk is an inherent part of doing business. This paper examines how to use Monte Carlo techniques to understand and leverage risk in software development projects and teams.
Synergy of Human and Artificial Intelligence in Software Engineering (Tao Xie)
Keynote talk by Tao Xie at the NSF-sponsored International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE 2013): http://promisedata.org/raise/2013/
An introduction to software engineering, based on the first chapter of "A (Partial) Introduction to Software Engineering Practices and Methods" by Laurie Williams.
Top 7 Unique WhatsApp API Benefits | Saudi Arabia (Yara Milbes)
Discover the transformative power of the WhatsApp API in our latest SlideShare presentation, "Top 7 Unique WhatsApp API Benefits." In today's fast-paced digital era, effective communication is crucial for both personal and professional success. Whether you're a small business looking to enhance customer interactions or an individual seeking seamless communication with loved ones, the WhatsApp API offers robust capabilities that can significantly elevate your experience.
In this presentation, we delve into the top 7 distinctive benefits of the WhatsApp API, provided by the leading WhatsApp API service provider in Saudi Arabia. Learn how to streamline customer support, automate notifications, leverage rich media messaging, run scalable marketing campaigns, integrate secure payments, synchronize with CRM systems, and ensure enhanced security and privacy.
Software Engineering, Software Consulting, Tech Lead. Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security, Spring Transaction, Spring MVC, Log4j, REST/SOAP web services.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
First Steps with Globus Compute Multi-User Endpoints (Globus)
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Navigating the Metaverse: A Journey into Virtual Evolution (Donna Lenk)
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... (Globus)
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Enterprise Resource Planning (ERP) systems include various modules that reduce a business's workload. Additionally, they organize workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Developing Distributed High-performance Computing Capabilities of an Open Sci... (Globus)
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Launch Your Streaming Platforms in Minutes (Roshan Dwivedi)
The claim of launching a streaming platform in minutes might be a bit of an exaggeration, but there are services that can significantly streamline the process. Here's a breakdown:
Pros of Speedy Streaming Platform Launch Services:
No coding required: These services often use drag-and-drop interfaces or pre-built templates, eliminating the need for programming knowledge.
Faster setup: Compared to building from scratch, these platforms can get you up and running much quicker.
All-in-one solutions: Many services offer features like content management systems (CMS), video players, and monetization tools, reducing the need for multiple integrations.
Things to Consider:
Limited customization: These platforms may offer less flexibility in design and functionality compared to custom-built solutions.
Scalability: As your audience grows, you might need to upgrade to a more robust platform or encounter limitations with the "quick launch" option.
Features: Carefully evaluate which features are included and if they meet your specific needs (e.g., live streaming, subscription options).
Examples of Services for Launching Streaming Platforms:
Muvi [muvi.com]
Uscreen [uscreen.tv]
Alternatives to Consider:
Existing Streaming platforms: Platforms like YouTube or Twitch might be suitable for basic streaming needs, though monetization options might be limited.
Custom Development: While more time-consuming, custom development offers the most control and flexibility for your platform.
Overall, launching a streaming platform in minutes might not be entirely realistic, but these services can significantly speed up the process compared to building from scratch. Carefully consider your needs and budget when choosing the best option for you.
How Recreation Management Software Can Streamline Your Operations (wottaspaceseo)
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
GraphSummit Paris - The art of the possible with Graph Technology (Neo4j)
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of this work the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... (Globus)
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Game-theoretic Analysis of Development Practices: Challenges and Opportunities
1. Game-Theoretic Analysis of Development Practices: Challenges and Opportunities
Carlos Gavidia-Calderon, Federica Sarro, Mark Harman and Earl T. Barr
The Journal of Systems and Software, 2019
2. "All the really stressful times for me
have been about process. They
haven't been about code. When
code doesn't work, that can actually
be exciting. Process problems are a
pain in the ***. You never, ever want
to have process problems ... That's
when people start getting really
angry at each other.”
Linus Torvalds (from The Register)
3. Roger Myerson (2007 Nobel Memorial Prize in Economic Sciences): "Game theory can be defined as the study of mathematical models of conflict and cooperation between intelligent rational decision-makers."
6. GTPI (“get pie”; Game-theoretic Software Process Improvement)
An end-to-end software process improvement approach based on game-theoretic models.
7. The Budget Protection Issue
"(The) software item was in dire need of a fix. … a fix was estimated at about 20 person-days. The … team instead chose to internally develop … a cheap patch which could be done for about five person-days"
8. Empirical Game-Theoretic Analysis (EGTA)
Full games are reduced to the normal form, where payoff values are obtained via simulation for a subset of strategies.
9. A Process Simulator of the Budget Protection Issue
Kludges are fast, more likely to require rework, and deteriorate the codebase. Fixes are slow, less prone to rework, and do not harm the codebase.
10. Validating the Simulation Model
Verifying the simulation model's ability to predict the behavior of the real system.
12. Adopting Automatic Code Analysis
We have increased the probability of rework for kludges, from 1.05 R to 2 R.
13. Nash Equilibria after Automatic Code Analysis
In 2 out of 3 equilibria, the kludge-intensive behavior has a significant probability.
14. Adopting Code Review
We have increased the probability of rework for kludges, from 2 R to 5 R.
15. Nash Equilibria after Code Review
Both Developers adopt the same strategy: Fix-Intensive with 100% probability.
16. The Assessor's Dilemma: Improving Bug Repair via Empirical Game Theory
Carlos Gavidia-Calderon, Federica Sarro, Mark Harman and Earl T. Barr
IEEE Transactions on Software Engineering, 2019
Editor's Notes
My name is Carlos Gavidia. I recently graduated with a PhD in Software Engineering from University College London, under the supervision of Earl Barr, Federica Sarro and Mark Harman. Today, I'll be presenting the paper we published in the "New Trends and Ideas" track of the Journal of Systems and Software.
Software needs to scale to support an increasing number of users, and software development processes need to scale to support large distributed teams operating over complex codebases. Let's take the Linux kernel as an example: it has had more than 13,000 contributors since 2015, adding around 10,000 lines of code daily. When operating at this scale, process problems can take precedence over technical ones, as seen in this quote from Linus Torvalds.
The most widely adopted software processes come from the practitioner community, and are based on decades of software engineering experience. In our paper, we complement this empirical approach by including mathematical models in the process improvement effort.
When we mention mathematical models, we are referring specifically to game-theoretic ones. Roger Myerson, in the slide, defines game theory as the "study of mathematical models of conflict and cooperation". It deals with scenarios where rational-self-interested agents interact, and these interactions affect the agent’s welfare. In game theory, these scenarios are called games, and the agents are called players. The game definition applies to card games, like poker, but also to financial markets and international relations. Game theory can help us understand, and even predict, how players would behave when engaging in a game.
Let's use an example to show how to apply game theory to a software development context. An organisation hires two freelance developers to build a software system: Bob, who is in charge of developing the web frontend, and Alice, who needs to develop the backend service. They both receive $50 upfront, and an additional $50 when they deliver their component.
For the system to be finished, the organisation also needs a REST API that manages communication between the backend and the frontend. The development of this API requires the expertise of both Bob and Alice. When the API is ready, both developers receive a bonus of $50.
Also, if either Bob or Alice has time, they can pursue additional freelancing contracts for a value of $100.
Now let's build a game-theoretic model of this scenario.
We consider Alice and Bob as the players. For the sake of simplicity, let's limit the actions they can perform to two: 1) cooperate, represented by an upwards arrow, and 2) not cooperate, represented by a downwards arrow. In this game, by cooperation we mean a disposition to work together.
The payoff table in this slide contains the payoff per player given the actions they perform. For example, the top-left cell corresponds to the scenario where Alice and Bob cooperate. In that case, the system goes live and each receives $150. When neither cooperates, in the bottom-right cell, they deliver their corresponding components without finishing the integration via the API, so each receives $100. When one freelancer cooperates but the other doesn't, the cooperating developer is not able to finish their component, obtaining only the initial $50, while the non-cooperating developer finishes their component and even has time to take an additional contract, pocketing $200.
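Reconstructed from that description (a sketch of the slide's table, each cell listing Alice's payoff / Bob's payoff):

```
                      Bob: cooperate     Bob: not cooperate
Alice: cooperate      $150 / $150        $50  / $200
Alice: not cooperate  $200 / $50         $100 / $100
```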
Behaviour in game-theoretic models is defined in terms of strategies, where a strategy assigns a probability to each action. Game theory provides insights into how rational players would behave in a game. At an optimal outcome, players adopt a strategy such that there are no incentives for deviating. We can obtain that outcome, also called the Nash equilibrium, by processing the payoff table with an equilibrium algorithm. For this example, at equilibrium both Alice and Bob adopt the same strategy: to not cooperate with a probability of 100%. At equilibrium, both freelancers obtain $100 with no incentive to deviate, since moving to cooperation would diminish their earnings by $50.
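As a minimal sketch (not from the paper), the pure-strategy equilibrium of this game can be checked by testing every action profile for profitable unilateral deviations:

```python
# Freelancer game payoffs from the example above; C = cooperate, D = not cooperate.
payoff = {  # (alice_action, bob_action) -> (alice_payoff, bob_payoff)
    ("C", "C"): (150, 150),
    ("C", "D"): (50, 200),
    ("D", "C"): (200, 50),
    ("D", "D"): (100, 100),
}

def is_nash(a, b):
    """True if neither player gains by unilaterally deviating from (a, b)."""
    ua, ub = payoff[(a, b)]
    alice_ok = all(payoff[(a2, b)][0] <= ua for a2 in "CD")
    bob_ok = all(payoff[(a, b2)][1] <= ub for b2 in "CD")
    return alice_ok and bob_ok

for a in "CD":
    for b in "CD":
        print(a, b, "Nash equilibrium" if is_nash(a, b) else "-")
# Only (D, D) survives: mutual non-cooperation, each earning $100.
```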
This outcome is not good for the organisation, since the system is not finished. It is also not good for the freelancers. If both cooperate, besides a happy client they obtain more money. That's the dilemma behind this freelancer game: although the organisation, and the freelancers, would be in a better position if they cooperate, the software process behind the contract forces them to abandon cooperation.
We believe that many software processes suffer from a similar problem, converging towards unwanted behaviour at equilibrium. To address this issue, in our paper we propose GTPI: a software process improvement approach based on game-theoretic models.
GTPI stands for game-theoretic process improvement and is composed of 4 steps. In the first one, we identify a process anomaly, like the freelancers not cooperating. In step 2, we build a game-theoretic model of the process to improve, like the payoff table in the previous slide. Having the model ready, we can obtain its Nash equilibrium and see if it matches the process anomaly identified in step 1. If that is the case, in step 3 we use the game-theoretic model to experiment with process interventions. Once we have found an adequate process intervention, meaning its model shows the desired behaviour at equilibrium, we proceed to the last step of GTPI and deploy and adopt the improved process.
Next, let's explore how to use GTPI by addressing a software process problem reported by Lavallée and Robillard in their ICSE 2015 paper. They describe a development team that found a problem in a software system. Building a permanent fix for this problem would take 20 person-days, but building a temporary workaround would demand only 5 person-days. To avoid going over budget, this team opted for the workaround.
The authors found that 12 other teams had faced the same problem before, and all of those teams also chose the workaround instead of the permanent fix. This scenario, which the authors called the budget protection issue, is problematic since developing a permanent fix is cheaper than developing 12 workarounds.
Now that in step 1 we have identified a process anomaly, the budget protection issue, we can move to the empirical game design step. Game representations, like the payoff table, grow exponentially in size with the number of actions and players. Lavallée and Robillard observed 45 people distributed in 13 teams for around 10 months, so we need abstraction to keep a manageable game size.
To this purpose, we propose to adopt Empirical Game-Theoretic Analysis, or EGTA. The reduced games produced by EGTA are also payoff tables, with the payoff values obtained via simulation. In EGTA models, the actions are limited to a set of strategies of interest.
For our model of the budget protection issue, we consider the two developers as players, with the payoff being the number of features delivered per release. Our model has only two strategies: 1) in the fix-intensive strategy, developers commit proper fixes until a week before the release, when they switch to committing kludges, or workarounds; and 2) in the kludge-intensive strategy, a developer switches to committing kludges when work items start accumulating. In the slide, for the table cell corresponding to a fix-intensive strategy against a kludge-intensive strategy, we use the process simulator to obtain the number of features delivered per developer, over multiple iterations, so we can calculate averages.
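As a sketch of this payoff-estimation step, assuming the process simulator is exposed as a plain Python callable (the interface is hypothetical, not prescribed by the paper):

```python
import statistics

def estimate_payoff_cell(simulate, strategy_1, strategy_2, iterations=1000):
    """Estimate one cell of the reduced game's payoff table, EGTA-style:
    run the process simulator many times for a fixed strategy profile and
    average each developer's payoff (features delivered per release).
    `simulate` is any callable returning a (payoff_1, payoff_2) pair."""
    runs = [simulate(strategy_1, strategy_2) for _ in range(iterations)]
    return (statistics.mean(r[0] for r in runs),
            statistics.mean(r[1] for r in runs))

# e.g. estimate_payoff_cell(simulate_release, "fix-intensive", "kludge-intensive")
```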
We select these two strategies arbitrarily for demonstration purposes. In a real setting, they should come from a dataset of the process to model. Building a process dataset is not trivial, considering that data might just not be there. For the budget protection issue, the code repository can be a source of commit behaviour. But then we would need a way to differentiate commits for proper fixes from commits for kludges. As a proxy, we can use a static analysis tool like FindBugs over commits. We can even apply NLP techniques to code review comments, or just ask the Tech Lead which behaviours they believe are relevant. Assembling the process dataset is an essential requisite for GTPI adoption.
Now let's review the simulation model we use to obtain payoff values. Work items arrive on a given day according to I, where they can be picked up by any available developer. The time developers spend on a work item depends on whether they choose to address it with a fix or a kludge. We want to reflect that kludges are faster to code than fixes, so fixes take 10% more time than T, the average resolution time, while kludges take 25% less than T. We expect kludges to be more likely to require rework, like bug fixing, so their probability of rework is 5% more than R, the average rework probability. For proper fixes, this probability is 10% lower than R. Kludges have a negative impact on codebase quality, so every time a kludge is committed the average resolution time increases by 5%.
In a 2016 paper, Mi and Keung reported that in an Eclipse Platform annual release the average resolution time T is around 30 days, with an R of 7% for bug reopening. We plugged these values into the simulator when obtaining payoff values.
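A minimal sketch of such a simulator for one developer, using the multipliers above and these Eclipse-derived values. The release length, the always-full work queue (the arrival process I is left out), and the one-bounce rework model are simplifying assumptions of this sketch, not details from the paper:

```python
import random

T0 = 30.0           # average resolution time in days (Eclipse Platform)
R = 0.07            # average rework (bug reopening) probability
RELEASE_DAYS = 365  # assumed release length: one annual release

def simulate_developer(strategy, release_days=RELEASE_DAYS, seed=None):
    """One developer working through a release. `strategy` maps days
    remaining to an action ('fix' or 'kludge'); returns features delivered."""
    rng = random.Random(seed)
    avg_time, day, features = T0, 0.0, 0
    while day < release_days:
        if strategy(release_days - day) == "fix":
            time, rework_prob = 1.10 * avg_time, 0.90 * R  # slower, safer
        else:
            time, rework_prob = 0.75 * avg_time, 1.05 * R  # faster, riskier
            avg_time *= 1.05   # every kludge deteriorates the codebase
        day += time
        if rng.random() < rework_prob:
            day += time        # the item bounces back and is redone once
        if day <= release_days:
            features += 1
    return features

# Fix-intensive: proper fixes until the final week, then kludges.
fix_intensive = lambda days_left: "fix" if days_left > 7 else "kludge"
print(simulate_developer(fix_intensive, seed=1))
```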
A step we skipped that is important when applying GTPI is simulation model validation. We need to be certain that the simulation actually reflects the process to improve. There's an extensive literature in software process simulation; here in the slide, we show one of the many validation approaches.
We split the process dataset into 3 parts: training, validation and testing. We use the training dataset to obtain simulation parameters, like the values of R, T, and I. The validation dataset is used for model calibration. Let's say that, to ensure accurate payoff values, we want our simulation to predict the average number of features delivered per release: we obtain an estimate from the simulation model and compare it with the features released in the validation dataset. If they do not match, we need to improve the simulation design. We use the testing dataset for a final verification: using the simulation, we obtain multiple samples of a target measure, like the number of features, and then check whether they match what is observed in the testing dataset. This comparison can be done via hypothesis testing, confidence intervals or even expert opinion.
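As one hedged way to implement that final check, the sketch below compares simulated and observed samples with a standard two-sample test (SciPy's Mann-Whitney U, chosen here only as an example; the data values are placeholders):

```python
from scipy import stats

# Placeholder samples (illustrative numbers only): features per release
# observed in the held-out testing dataset, and the same measure sampled
# from the calibrated simulator.
observed = [18, 17, 19, 18, 20, 18]
simulated = [18.4, 17.9, 18.7, 18.1, 19.0, 18.3]

# If we fail to reject H0 (same underlying distribution), we gain confidence
# that the simulator reflects the real process.
stat, p_value = stats.mannwhitneyu(observed, simulated, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```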
When the simulation model is ready, we can use it to populate the payoff table. After feeding the payoff table to a game solver like Gambit, we see that, at equilibrium, both developers do the same thing: they adopt the kludge-intensive strategy with 100% probability, matching what was reported by Lavallée and Robillard.
We see that this payoff table shares similarities with the model generated for the freelancer's dilemma. Both developers would deliver more features if they adopted the fix-intensive strategy, but this strategy is absent at equilibrium.
Now that we have confirmed the software process anomaly, we can move to the third step of GTPI: using the game-theoretic model to explore potential solutions.
An initial attempt would be to make kludges more expensive by making them more likely to require rework. We can try adopting post-commit analysis with an automatic tool like FindBugs, which can detect problematic code. Let's assume that adopting such a tool would increase the rework probability for kludges from 1.05 R to 2 R.
We built a new payoff table using the updated simulation model and used Gambit to obtain the behaviour at equilibrium. Payoff values show the kludge-intensive strategy is now producing fewer features per release. Now we have 3 equilibria instead of 1: while in one we have a 100% probability for the fix-intensive strategy, as desired, in the other two the kludge-intensive strategy is still very dominant. Adopting automatic code analysis is beneficial, but we believe we can do better.
Given the promising results obtained by increasing the cost of kludges, let's try to go a bit further. If, besides automatic code analysis, we include code review by an actual engineer, let's assume the probability of rework for kludges increases from 2 R to 5 R.
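Both interventions amount to turning one knob, the kludge rework multiplier, so step 3 can be framed as a sweep. This is hypothetical wiring, using only the multipliers discussed in the talk:

```python
# Candidate processes explored in step 3 of GTPI, expressed as the
# multiplier applied to R (the average rework probability) for kludges.
INTERVENTIONS = {
    "baseline process":        1.05,  # kludges barely riskier than fixes
    "automatic code analysis": 2.00,  # post-commit FindBugs-style checks
    "code review":             5.00,  # human review on top of the tooling
}

for name, multiplier in INTERVENTIONS.items():
    # For each candidate: re-run the simulations with the new multiplier to
    # rebuild the payoff table, solve it with a game solver such as Gambit,
    # and check whether fix-intensive behaviour dominates at equilibrium.
    print(f"{name}: kludge rework probability = {multiplier} * R")
```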
Here's the new payoff table using the updated simulation model. The equilibrium obtained is now aligned with what the organisation wants: both developers adopt the fix-intensive strategy with a probability of 100%. Also, while in the original process we obtained 18.12 features per release, in the new process at equilibrium we are delivering 19.58 features, an increase of 8% while keeping a healthy codebase. After evaluating this candidate process in the model, we now have some confidence to actually deploy it to the team.
In our JSS paper, the goal was to introduce GTPI and show how we use game theory to reason formally about software processes. In a later publication in the IEEE Transactions on Software Engineering, we use GTPI to improve the prioritisation of software tasks. Using game-theoretic models, we show that industry practices like bug triage are not effective, and we propose a new reputation-based process with truthful prioritisation at equilibrium. If you're interested in this topic, please read this work.
Besides the budget protection issue and task prioritisation, we believe there are many other process problems that can be tackled with game theory. We invite you to use GTPI to improve software processes at your organisation. Thank you very much for your time; I'm now ready for questions.