This document summarizes an article that studied the effects of global variables on program dependence and dependence clusters. It introduces an algorithm to measure the impact of a global variable by transforming the program to remove dependencies related to that variable. The algorithm was applied to 21 programs totaling over 50,000 lines of code to analyze the effects of 849 global variables. The results showed that over half the programs had global variables that significantly impacted overall dependence, and a quarter of programs had a global variable solely responsible for a dependence cluster. The study provides insights into how and why global variables can lead to dependence clusters.
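The measurement idea in this summary can be sketched concretely: model the program as a dependence graph, treat a dependence cluster as a set of mutually dependent vertices, and compare the largest cluster before and after removing the dependencies that flow through a global variable. The tiny graph, the variable name `g`, and the fixpoint helper below are illustrative assumptions, not the paper's actual algorithm.

```python
def largest_cluster(nodes, edges):
    """Size of the largest set of mutually dependent nodes (a dependence cluster)."""
    # Transitive closure by fixpoint iteration (fine for small illustrative graphs).
    reach = {n: {n} for n in nodes}
    changed = True
    while changed:
        changed = False
        for a, b in edges:
            new = reach[b] - reach[a]
            if new:
                reach[a] |= new
                changed = True
    # Nodes that reach each other form one cluster.
    return max(sum(1 for m in nodes if m in reach[n] and n in reach[m])
               for n in nodes)

# Hypothetical program: functions f1..f3 all read and write the global g,
# so every function becomes mutually dependent through g.
nodes = ["f1", "f2", "f3", "g"]
edges = [(f, "g") for f in ("f1", "f2", "f3")] + \
        [("g", f) for f in ("f1", "f2", "f3")]

with_global = largest_cluster(nodes, edges)
without_global = largest_cluster(nodes, [e for e in edges if "g" not in e])
```

Removing the dependencies through `g` collapses the four-node cluster into singletons, which is the kind of impact the transformation-based measurement quantifies.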
130404 fehmi jaafar - on the relationship between program evolution and fau...Ptidej Team
This document summarizes an empirical study that examined the relationship between class evolution patterns (such as class lifetime and co-evolution with other classes) and fault-proneness in software programs. The study analyzed several versions of three Java programs to classify classes based on their evolution profiles and identify faults. The results showed that persistent classes that existed throughout the program lifespan were significantly less fault-prone than short-lived or transient classes. Additionally, faults fixed by maintaining co-evolved classes that changed together were significantly more than faults fixed in other classes. These findings provide insights into how class evolution impacts faults that can help improve fault detection and prevention.
This document discusses relating the evolution of classes in object-oriented programs to their fault-proneness. It classifies classes based on their lifetimes (persistent, short-lived, transient) and whether they co-evolve. An empirical study found persistent classes are significantly less fault-prone, while faults involving co-evolved classes are significantly more common. The results provide insight into how class evolution impacts faults and how to improve fault detection and prevention.
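The classification step described above can be sketched by labeling each class from the set of versions in which it appears. The threshold and the example history below are illustrative assumptions; the study's exact definitions of short-lived and transient may differ.

```python
def classify_lifetime(presence, n_versions, transient_max=1):
    """Label each class by its evolution profile across program versions.

    presence: class name -> set of version indices in which the class exists.
    transient_max is an illustrative threshold, not the study's definition.
    """
    labels = {}
    for cls, versions in presence.items():
        if len(versions) == n_versions:
            labels[cls] = "persistent"    # present in every analyzed version
        elif len(versions) <= transient_max:
            labels[cls] = "transient"     # appeared only very briefly
        else:
            labels[cls] = "short-lived"
    return labels

# Hypothetical five-version history.
history = {"Core": {0, 1, 2, 3, 4}, "TempFix": {2}, "Helper": {1, 2, 3}}
labels = classify_lifetime(history, n_versions=5)
```

With labels like these, fault counts can then be compared across the three profiles, which is how the study relates evolution to fault-proneness.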
This document provides an overview of different frameworks and technologies for linking models, data, and tools for integrated environmental modeling. It begins with definitions of key concepts like architecture, component, interface, and coupling. It then provides a brief alphabetical description of 8 major modeling frameworks: Common Component Architecture (CCA), Earth System Modeling Framework (ESMF), Framework for Risk Analysis of Multi-Media Environmental Systems (FRAMES), High Level Architecture (HLA), Kepler, Model Coupling Toolkit (MCT), and OASIS/PALM. The frameworks differ in their approaches but also complement each other to some degree. The document aims to understand how and why the various approaches address conflicting demands like generality, flexibility, ease of use
A methodology to evaluate object oriented software systems using change requi...ijseajournal
It is well known that software maintenance plays a major role in the software development life cycle. As object-oriented programming has become the standard, it is very important to understand the problems of maintaining object-oriented software systems. This paper aims at evaluating object-oriented software systems through a change-requirement-traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues have been related to change impact algorithms and inheritance of functionality.
This document discusses and compares several agent-assisted methodologies for developing multi-agent systems:
- It reviews Gaia, HLIM, PASSI, and Tropos methodologies, outlining their key models and phases. Gaia focuses on analysis and design, HLIM models internal and external agent behavior, and PASSI and Tropos incorporate UML modeling.
- It then proposes a new MAB methodology intended to address shortcomings of existing approaches. MAB includes requirements, analysis, design, and implementation phases and models such as use case maps and agent roles.
- Finally, it concludes that agent technologies represent a promising approach for developing complex software systems, but that matching methodologies to problem domains and developing princip
The document presents a changeability evaluation model for object-oriented software. It begins with an introduction to changeability and its importance. It then reviews existing literature on measuring changeability. A relationship is established between changeability and object-oriented design properties like coupling, inheritance, and polymorphism. The paper then develops a changeability evaluation model using multiple linear regression. The model relates changeability as the dependent variable to design properties as independent variables. The model is validated using experimental tests on data from class diagrams, which show the model is highly significant.
The document presents a changeability evaluation model for object-oriented software. It begins with an introduction to changeability and its importance. It then reviews existing literature on measuring changeability. A relationship is established between changeability and object-oriented design properties like coupling, inheritance, and polymorphism. The paper then develops a changeability evaluation model using multiple linear regression. The model relates changeability as the dependent variable to object-oriented design metrics as independent variables. The model is validated experimentally using data from class diagrams, showing it is highly significant.
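The regression step in this model can be sketched with ordinary least squares: changeability (the dependent variable) is fitted against design metrics such as coupling, inheritance depth, and polymorphism count. The metric choice and toy data below are assumptions for illustration; the synthetic scores are generated from a known linear relation so the fit recovers it exactly.

```python
import numpy as np

# Toy metric matrix: columns = coupling, inheritance depth, polymorphism count.
X = np.array([[1, 2, 3],
              [2, 1, 0],
              [3, 3, 1],
              [4, 0, 2],
              [5, 2, 4],
              [6, 1, 1]], dtype=float)
# Synthetic changeability scores from a known linear relation.
y = 2.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2]

A = np.column_stack([np.ones(len(X)), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # multiple linear regression
predicted = A @ coef
```

On real class-diagram data the residuals would not vanish, and significance of the fitted coefficients is what the paper's experimental validation tests.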
This document presents a preliminary ontology to identify factors that influence software maintenance. The purpose of the ontology is to provide context for empirical studies of maintenance and help understand contradictory results. The ontology identifies key factors such as the maintained product, maintenance activities, maintenance organization and process, and maintenance engineers. It presents these factors using a UML model and discusses how variations in the factors could impact empirical studies of maintenance productivity, quality or efficiency. Two common maintenance scenarios are also described to demonstrate how the ontology can be used to characterize differences between scenarios.
Classes that exhibit similar evolution profiles or change patterns may be interdependent. This document analyzes several software artifact evolution patterns and their relationship to faults, including:
1) Asynchronous and dephased change patterns between files were found to exist in several systems and can help with change propagation, team management, and fault understanding.
2) Classes with short or transient lifetimes were found to be more fault-prone than persistent classes.
3) Classes that co-evolved or shared the same evolution profile were not found to be significantly more fault-prone.
4) Classes with static dependencies or that co-changed with classes exhibiting certain anti-patterns like Blob or ComplexClass patterns were found
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...csandit
This document discusses evaluating software degradation through six versions of the open-source software JHotDraw. Data was collected dynamically from each version using AspectJ to derive two metrics: entropy and software maturity index. Entropy measures complexity and was expected to decrease as the software evolved from adding new features. The software maturity index was derived from documentation and aimed to find if entropy decreases most when maturity is highest, implying a relationship between the two. Six versions of JHotDraw released over five years were tested to investigate changes in these metrics and degradation as the software evolved through new versions.
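The two metrics named above can be sketched as follows: Shannon entropy over the distribution of changes across files, and a software maturity index in the spirit of IEEE 982.1, SMI = (M_T − (F_a + F_c + F_d)) / M_T. These are plausible readings of the metric names, not necessarily the study's exact definitions, and the numbers are invented.

```python
import math

def change_entropy(change_counts):
    """Shannon entropy (bits) of the change distribution across files."""
    total = sum(change_counts)
    probs = [c / total for c in change_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

def maturity_index(total_modules, added, changed, deleted):
    """SMI = (M_T - (F_a + F_c + F_d)) / M_T; values near 1 suggest stability."""
    return (total_modules - (added + changed + deleted)) / total_modules

h = change_entropy([5, 5, 5, 5])       # uniform changes over 4 files: 2 bits
smi = maturity_index(100, 5, 10, 5)    # 20 of 100 modules touched: 0.8
```

Tracking these two numbers across the six releases is what lets the study ask whether entropy drops when maturity peaks.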
Lectura 2.2 the roleofontologiesinemergnetmiddlewareMatias Menendez
The document discusses the role of ontologies in supporting emergent middleware. Emergent middleware is dynamically generated distributed system infrastructure that enables interoperability in complex distributed systems.
Ontologies play a key role by providing meaning and reasoning capabilities to allow the right runtime choices to be made. They support various functions throughout an emergent middleware architecture, including discovery, composition, and mediation. Two experiments provide initial evidence of ontologies' potential role in middleware by enabling semantic matching and process mediation. However, challenges remain around generating ontologies and addressing interoperability between heterogeneous ontologies.
journal of object technology - context oriented programmingBoni
This document introduces Context-oriented Programming (COP) as a new programming technique to enable context-dependent computation. COP treats context explicitly and provides mechanisms to dynamically adapt behavior in reaction to context changes at runtime. The document discusses the motivation for COP, defines context and how it can influence behavior, and provides examples of COP implementations in different programming languages. COP aims to bring the same degree of dynamicity to behavioral variations as object-oriented programming brought to polymorphism.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document provides a selective survey of software reliability models. It discusses both static models used in early development stages and dynamic models used later. For static models, it describes a phase-based model and predictive development life cycle model. For dynamic models, it outlines reliability growth models, including binomial, Poisson, and other classes. It also presents a case study of incorporating code changes as a covariate into reliability modeling during testing of a large telecommunications system. The document concludes by advocating for wider use of statistical software reliability models to improve development and testing processes.
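As one concrete instance of the Poisson-class growth models mentioned above, the Goel–Okumoto NHPP model gives the expected cumulative number of failures by time t as m(t) = a(1 − e^(−bt)). The parameter values below are purely illustrative.

```python
import math

def expected_failures(a, b, t):
    """Goel-Okumoto mean value function: m(t) = a * (1 - e^(-b*t)).

    a: expected total number of failures; b: failure detection rate.
    """
    return a * (1.0 - math.exp(-b * t))

early = expected_failures(100, 0.05, 10)    # failures expected early in testing
late = expected_failures(100, 0.05, 100)    # growth flattens toward a = 100
```

The flattening of m(t) toward a is what makes such models usable as a release criterion: when the curve's remaining gap to a is small, few failures are left to find.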
1) The document discusses challenges with achieving interoperability between ultra large scale systems due to heterogeneity in platforms, data, and semantics.
2) It proposes a three-layered model for interoperability using web service technologies and semantic web approaches to address these challenges.
3) Key aspects of interoperability discussed include different levels (e.g. syntactic, semantic), use of ontologies to provide common understandings and resolve conflicts, and semantic web service approaches like OWL-S that semantically annotate service descriptions.
The document presents a framework called Hydrogen for patch verification. It consists of a novel program representation called a multiversion interprocedural control flow graph (MVICFG) that integrates and compares control flow graphs of multiple program versions. It also includes a demand-driven, path-sensitive symbolic analysis that traverses the MVICFG to detect bugs related to software changes and versions. The MVICFG representation and associated analysis enable efficient and precise patch verification across multiple versions of a program.
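A core idea of the MVICFG — one graph whose edges know which versions they belong to — can be sketched by merging per-version CFG edge sets and annotating each edge with the versions that contain it. The representation below is a simplification of the paper's structure, and the example edges are invented.

```python
def merge_versions(cfgs):
    """Merge per-version CFGs into one edge -> {versions} map (MVICFG-style).

    cfgs: version label -> set of (src, dst) control-flow edges.
    """
    annotated = {}
    for version, edges in cfgs.items():
        for edge in edges:
            annotated.setdefault(edge, set()).add(version)
    return annotated

mv = merge_versions({
    "v1": {(1, 2), (2, 3)},    # hypothetical pre-patch control flow
    "v2": {(1, 2), (2, 4)},    # the patch rewires node 2 to node 4
})
# Edges not shared by all versions mark exactly where the patch changed flow.
changed = {e for e, vs in mv.items() if vs != {"v1", "v2"}}
```

A demand-driven analysis can then restrict its path exploration to paths through these version-specific edges, which is the source of the efficiency claim.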
Software Refactoring Under Uncertainty: A Robust Multi-Objective ApproachWiem Mkaouer
This document describes a multi-objective robust optimization approach for software refactoring that accounts for uncertainty in code smell severity levels and class importance. The approach formulates refactoring as a multi-objective problem to find solutions that maximize both quality, by correcting code smells, and robustness to changes in severity levels and importance. An evaluation on six open source projects found the approach generates refactoring solutions comparable in quality to existing approaches but with significantly better robustness across different scenarios.
EVALUATION OF SOFTWARE DEGRADATION AND FORECASTING FUTURE DEVELOPMENT NEEDS I...ijseajournal
This article is an extended version of a previously published conference paper. In this research, JHotDraw (JHD), a well-tested and widely used open-source Java-based graphics framework developed with best software engineering practices, was selected as a test suite. Six versions of this software were profiled and data collected dynamically, from which four metrics, namely (1) entropy, (2) software maturity index, and (3, 4) COCOMO effort and duration, were used to analyze software degradation and maturity level, and the obtained results were used as input to time series analysis in order to predict the effort and duration that may be needed for the development of future versions. The novel idea is that historical evolution data is used to project, predict, and forecast resource requirements for future developments. The technique presented in this paper will empower software development decision makers with a viable tool for planning and decision making.
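The forecasting step — feeding historical effort into a time series model to project the next version — can be sketched with a simple linear trend fit. The effort numbers below are invented, and the study's actual time series method may well be more sophisticated.

```python
def linear_forecast(series, steps=1):
    """Fit a least-squares line to a series and extrapolate `steps` ahead."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in enumerate(series)) \
        / sum((x - xbar) ** 2 for x in range(n))
    intercept = ybar - slope * xbar
    return intercept + slope * (n - 1 + steps)

# Hypothetical person-month effort for four past versions.
effort_history = [10.0, 12.0, 14.0, 16.0]
next_effort = linear_forecast(effort_history)   # the trend continues: 18.0
```

Even this crude extrapolation illustrates the paper's point: version history, not just the current snapshot, carries the signal needed for resource planning.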
Automatically inferring structure correlated variable set for concurrent atom...ijseajournal
The atomicity of correlated variables is quite tedious and error-prone for programmers to explicitly infer and specify in multi-threaded programs. Researchers have studied the automatic discovery of the atomic sets programmers intended, such as by frequent itemset mining and rule-based filtering. However, due to the lack of inspection of inner structure, some implicitly semantics-independent variables intended by the user are mistakenly classified as correlated. In this paper, we present a novel simplification method for the program dependency graph and a corresponding graph mining approach to detect the set of variables correlated by logical structures in the source code. This approach formulates the automatic inference of the correlated variables as mining frequent subgraphs of the simplified data and control flow dependency graph. The presented simplified graph representation of program dependency is not only robust to varieties of coding style, but also essential to recognizing logical correlation. We implemented our method and compared it with previous methods on open source program repositories. The experimental results show that our method has a lower false positive rate than previous methods in the initial development stage. It is concluded that our presented method can provide programmers during development with a sufficiently precise correlated variable set for checking atomicity.
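The frequent-itemset baseline that this work improves on can be sketched directly: treat the variables accessed in each atomic region as a transaction, and keep the variable pairs that co-occur at least `min_support` times. The transactions below are invented; the paper's graph-mining approach additionally inspects dependency structure to filter out incidental co-occurrences.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Variable pairs accessed together in at least `min_support` regions."""
    counts = Counter()
    for accessed in transactions:
        for pair in combinations(sorted(accessed), 2):
            counts[pair] += 1
    return {pair for c_pair, c in counts.items() if c >= min_support
            for pair in [c_pair]}

# Hypothetical per-region variable accesses from a multi-threaded program.
regions = [{"x", "y", "z"}, {"x", "y"}, {"x", "y"}, {"z"}]
correlated = frequent_pairs(regions, min_support=3)    # only x and y co-occur often
```

The false positives the paper targets are pairs like these that co-occur frequently yet are semantically independent — structure-blind counting cannot tell them apart.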
IMPLEMENTATION OF DYNAMIC COUPLING MEASUREMENT OF DISTRIBUTED OBJECT ORIENTED...IJCSEA Journal
This document summarizes a research paper that proposes a method for dynamically measuring coupling in distributed object-oriented software systems. The method involves three steps: instrumentation of the Java Virtual Machine to trace method calls, post-processing of the trace files to merge information, and calculation of coupling metrics based on the dynamic traces. The implementation results show that the proposed approach can effectively measure coupling metrics dynamically by accounting for polymorphism and dynamic binding, overcoming limitations of traditional static coupling analysis.
IMPLEMENTATION OF DYNAMIC COUPLING MEASUREMENT OF DISTRIBUTED OBJECT ORIENTED...IJCSEA Journal
Software metrics are increasingly playing a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. Existing literature on software metrics is mainly focused on centralized systems, while work in the area of distributed systems, particularly service-oriented systems, is scarce. Distributed systems with service-oriented components run in even more heterogeneous networking and execution environments. Traditional coupling measures take into account only "static" couplings. They do not account for "dynamic" couplings due to polymorphism, and may significantly underestimate the complexity of software and misjudge the need for code inspection, testing, and debugging. This is expected to result in poor predictive accuracy of quality models in distributed object-oriented systems that rely on static coupling measurements. To overcome these issues, we propose a hybrid model for measuring coupling dynamically in distributed object-oriented software. The proposed method has three steps: instrumentation, post-processing, and coupling measurement. First, the instrumentation process is performed using a JVM that has been modified to trace method calls. During this process, three trace files are created, namely .prf, .clp, and .svp. In the second step, the information in these files is merged; at the end of this step, the merged detailed trace of each JVM contains pointers to the merged trace files of the other JVMs, such that the path of every remote call from the client to the server can be uniquely identified. Finally, the coupling metrics are measured dynamically. The implementation results show that the proposed system effectively measures the coupling metrics dynamically.
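The final coupling-measurement step can be sketched from a merged trace: for each class, count the distinct classes it actually called at runtime (a dynamic import-coupling count). The trace pairs below are invented; the real implementation derives them from the merged .prf, .clp, and .svp files, which is what lets polymorphic and remote calls be attributed correctly.

```python
from collections import defaultdict

def dynamic_import_coupling(call_trace):
    """For each class, the number of distinct other classes it called at runtime.

    call_trace: iterable of (caller_class, callee_class) pairs, including
    calls resolved through polymorphism and dynamic binding.
    """
    callees = defaultdict(set)
    for caller, callee in call_trace:
        if caller != callee:               # ignore self-calls
            callees[caller].add(callee)
    return {cls: len(targets) for cls, targets in callees.items()}

# Hypothetical merged trace spanning two JVMs.
trace = [("Client", "Stub"), ("Client", "Stub"), ("Client", "Logger"),
         ("Stub", "Server"), ("Server", "Server")]
coupling = dynamic_import_coupling(trace)
```

Because the counts come from observed calls rather than declared types, a call dispatched to a subclass at runtime is charged to the class actually exercised — the case static coupling analysis misses.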
The document discusses object-oriented system development and modeling. It covers topics like:
1. The main stages of traditional system development life cycles like requirements, analysis, design, implementation, and installation. As well as common life cycle models like waterfall, V-model, spiral, and prototyping.
2. Phases of object-oriented development focus on the state of the system rather than activities, including inception, elaboration, construction, and transition.
3. Modeling techniques for object-oriented systems including the Unified Modeling Language (UML), Rational Unified Process (RUP), abstraction, decomposition, and class-responsibility-collaboration (CRC) cards.
4
An Empirical Study Of Requirements Model UnderstandingKate Campbell
The document describes a study that compares two requirements modeling methods - Use Cases, a scenario-based approach, and Tropos, a goal-oriented approach. The study aims to evaluate how well novice requirements analysts can understand models created with each method. It involved 19 students performing tasks like determining consistency between models and system descriptions, understanding models, and modifying models. Preliminary results found that Tropos models seemed more comprehensible but took more time to understand compared to Use Case models. The full study aims to provide insights into which modeling method better supports requirements analysis tasks.
An Empirical Study Of Requirements Model Understanding Use Case Vs. Tropos M...Raquel Pellicier
The document describes a study that compares two requirements modeling methods - Use Cases and Tropos - to evaluate how well novice requirements analysts can understand models created with each method. The study involved 19 students performing tasks like determining consistency between models and system descriptions, understanding models for analysis activities, and modifying existing models. Preliminary results found that Tropos models seemed more comprehensible to analysts, though they took more time to understand compared to Use Case models. The goal was to provide experimental data on comprehending different modeling paradigms to help decide which to use for a given project.
Contributors to Reduce Maintainability Cost at the Software Implementation PhaseWaqas Tariq
This document discusses factors that can reduce software maintenance costs during the implementation phase. It identifies that maintenance costs are highest during software development phases. The objective is to define criteria to assess software quality characteristics and assist during implementation. This will help reduce maintenance costs by creating criteria groups to support writing standard code, developing a model to apply criteria, and increasing understandability. Student groups will study code standardization, write programs, and test software maintenance on programs to validate the model and proposed criteria.
The document discusses linguistic structures for incorporating fault tolerance into application software. It begins by explaining that as software complexity increases, software faults have become more prevalent and impactful, necessitating fault tolerance at the application level. It then establishes a set of desirable attributes for application-level fault tolerance structures and surveys current solutions, assessing each according to these attributes. The goal is to identify shortcomings and opportunities to develop improved fault tolerance structures.
BIO-INSPIRED REQUIREMENTS VARIABILITY MODELING WITH USE CASE mathsjournal
Background. The Feature Model (FM) is the most important technique used to manage variability across products in Software Product Lines (SPLs). Often, SPL requirements variability is handled using a variable use case model, which is a real challenge in current approaches: a large gap between their concepts and those of the real world leads to poor quality, poor support for FMs, and variability that does not cover all requirements modeling levels.
Defect Management Practices and Problems in Free/Open Source Software ProjectsWaqas Tariq
With the advent of Free/Open Source Software (F/OSS) paradigm, a large number of projects are evolving that make use of F/OSS infrastructure and development practices. Defect Management System is an important component in F/OSS infrastructure which maintains defect records as well as tracks their status. The defect data comprising more than 60,000 defect reports from 20 F/OSS Projects is analyzed from various perspectives, with special focus on evaluating the efficiency and effectiveness in resolving defects and determining responsiveness towards users. Major problems and inefficiencies encountered in Defect Management among F/OSS Projects have been identified. A process is proposed to distribute roles and responsibilities among F/OSS participants which can help F/OSS Projects to improve the effectiveness and efficiency of Defect Management and hence assure better quality of F/OSS Projects.
More Related Content
Similar to Assessing The Impact Of Global Variables On Program Dependence And Dependence Clusters
Classes that exhibit similar evolution profiles or change patterns may be interdependent. This document analyzes several software artifact evolution patterns and their relationship to faults, including:
1) Asynchronous and dephased change patterns between files were found to exist in several systems and can help with change propagation, team management, and fault understanding.
2) Classes with short or transient lifetimes were found to be more fault-prone than persistent classes.
3) Classes that co-evolved or shared the same evolution profile were not found to be significantly more fault-prone.
4) Classes with static dependencies or that co-changed with classes exhibiting certain anti-patterns like Blob or ComplexClass patterns were found
EVALUATION AND STUDY OF SOFTWARE DEGRADATION IN THE EVOLUTION OF SIX VERSIONS...csandit
This document discusses evaluating software degradation through six versions of the open-source software JHotDraw. Data was collected dynamically from each version using AspectJ to derive two metrics: entropy and software maturity index. Entropy measures complexity and was expected to decrease as the software evolved from adding new features. The software maturity index was derived from documentation and aimed to find if entropy decreases most when maturity is highest, implying a relationship between the two. Six versions of JHotDraw released over five years were tested to investigate changes in these metrics and degradation as the software evolved through new versions.
Lectura 2.2 the roleofontologiesinemergnetmiddlewareMatias Menendez
The document discusses the role of ontologies in supporting emergent middleware. Emergent middleware is dynamically generated distributed system infrastructure that enables interoperability in complex distributed systems.
Ontologies play a key role by providing meaning and reasoning capabilities to allow the right runtime choices to be made. They support various functions throughout an emergent middleware architecture, including discovery, composition, and mediation. Two experiments provide initial evidence of ontologies' potential role in middleware by enabling semantic matching and process mediation. However, challenges remain around generating ontologies and addressing interoperability between heterogeneous ontologies.
Journal of Object Technology: Context-oriented Programming (Boni)
This document introduces Context-oriented Programming (COP) as a new programming technique to enable context-dependent computation. COP treats context explicitly and provides mechanisms to dynamically adapt behavior in reaction to context changes at runtime. The document discusses the motivation for COP, defines context and how it can influence behavior, and provides examples of COP implementations in different programming languages. COP aims to bring the same degree of dynamicity to behavioral variations as object-oriented programming brought to polymorphism.
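The core COP idea described above, behavior that adapts to a dynamically scoped context, can be sketched in plain Python. This is a minimal illustration only: real COP languages such as ContextL provide layers as a first-class mechanism, whereas the `with_layer` helper below is a hypothetical simplification.

```python
import contextlib

# Active "layers" represent the current execution context.
_active_layers = set()

@contextlib.contextmanager
def with_layer(name):
    """Activate a context layer for the dynamic extent of a block."""
    _active_layers.add(name)
    try:
        yield
    finally:
        _active_layers.discard(name)

def greet(person):
    # Base behavior, refined while the "formal" layer is active.
    if "formal" in _active_layers:
        return f"Dear {person}"
    return f"Hi {person}"

print(greet("Ada"))        # base behavior: Hi Ada
with with_layer("formal"):
    print(greet("Ada"))    # context-adapted behavior: Dear Ada
print(greet("Ada"))        # layer deactivated again: Hi Ada
```

The point of the sketch is the dynamic extent: the behavioral variation is active only inside the `with` block, mirroring how COP scopes layer activation to a runtime context rather than to a static class hierarchy.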
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... (ijceronline)
This document provides a selective survey of software reliability models. It discusses both static models used in early development stages and dynamic models used later. For static models, it describes a phase-based model and predictive development life cycle model. For dynamic models, it outlines reliability growth models, including binomial, Poisson, and other classes. It also presents a case study of incorporating code changes as a covariate into reliability modeling during testing of a large telecommunications system. The document concludes by advocating for wider use of statistical software reliability models to improve development and testing processes.
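Reliability growth models of the Poisson class mentioned above typically fit a mean value function to cumulative failure counts. A standard example is the Goel-Okumoto NHPP model, m(t) = a(1 - e^(-bt)); the parameter values below are illustrative, not taken from the surveyed case study.

```python
import math

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t under the Goel-Okumoto
    NHPP model: m(t) = a * (1 - exp(-b * t)).
    a = total expected failures, b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

# Illustrative parameters (not fitted to real data):
a, b = 100.0, 0.05
for t in (0, 10, 50, 200):
    print(f"t={t:>3}: m(t) = {goel_okumoto(t, a, b):.1f}")
# m(t) rises toward a as t grows, modeling reliability growth
# as the remaining faults are found and fixed during testing.
```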
1) The document discusses challenges with achieving interoperability between ultra large scale systems due to heterogeneity in platforms, data, and semantics.
2) It proposes a three-layered model for interoperability using web service technologies and semantic web approaches to address these challenges.
3) Key aspects of interoperability discussed include different levels (e.g. syntactic, semantic), use of ontologies to provide common understandings and resolve conflicts, and semantic web service approaches like OWL-S that semantically annotate service descriptions.
The document presents a framework called Hydrogen for patch verification. It consists of a novel program representation called a multiversion interprocedural control flow graph (MVICFG) that integrates and compares control flow graphs of multiple program versions. It also includes a demand-driven, path-sensitive symbolic analysis that traverses the MVICFG to detect bugs related to software changes and versions. The MVICFG representation and associated analysis enable efficient and precise patch verification across multiple versions of a program.
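The MVICFG idea, a single graph whose edges are annotated with the set of versions they belong to, can be sketched in miniature. The node names and edge encoding below are illustrative, not Hydrogen's actual data structures.

```python
def build_mvicfg(version_cfgs):
    """Union the CFGs of several program versions into one graph whose
    edges carry the set of versions they appear in (an MVICFG-style
    representation). Each CFG is given as a set of (src, dst) edges."""
    merged = {}
    for version, edges in version_cfgs.items():
        for edge in edges:
            merged.setdefault(edge, set()).add(version)
    return merged

# Toy example: v2 patches v1 by inserting a "fix" node on one path.
v1 = {("entry", "check"), ("check", "exit")}
v2 = {("entry", "check"), ("check", "fix"), ("fix", "exit")}
mvicfg = build_mvicfg({"v1": v1, "v2": v2})

# Edges present in only some versions localize what the patch changed,
# which is where a patch-verification analysis would focus its paths:
changed = {e for e, vs in mvicfg.items() if vs != {"v1", "v2"}}
print(sorted(changed))
```

Shared edges need to be analyzed only once across versions, which is the intuition behind the efficiency claim: the demand-driven analysis spends its effort on the version-specific parts of the graph.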
Software Refactoring Under Uncertainty: A Robust Multi-Objective Approach (Wiem Mkaouer)
This document describes a multi-objective robust optimization approach for software refactoring that accounts for uncertainty in code smell severity levels and class importance. The approach formulates refactoring as a multi-objective problem to find solutions that maximize both quality, by correcting code smells, and robustness to changes in severity levels and importance. An evaluation on six open source projects found the approach generates refactoring solutions comparable in quality to existing approaches but with significantly better robustness across different scenarios.
EVALUATION OF SOFTWARE DEGRADATION AND FORECASTING FUTURE DEVELOPMENT NEEDS I... (ijseajournal)
This article is an extended version of a previously published conference paper. In this research, JHotDraw (JHD), a well-tested and widely used open-source Java-based graphics framework developed with best software engineering practice, was selected as the test suite. Six versions of this software were profiled and data collected dynamically, from which four metrics, namely (1) entropy, (2) software maturity index, (3) COCOMO effort, and (4) COCOMO duration, were used to analyze software degradation and maturity level. The obtained results were then fed into time series analysis to predict the effort and duration that may be needed for the development of future versions. The novel idea is that historical evolution data is used to project and forecast resource requirements for future development. The technique presented in this paper will give software development decision makers a viable tool for planning and decision making.
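The first two metrics named above can be sketched as follows. This is one plausible reading: Shannon entropy over the distribution of changes across modules, and the IEEE software maturity index SMI = (M_T - (F_a + F_c + F_d)) / M_T; the exact formulations used in the study may differ.

```python
import math

def change_entropy(change_counts):
    """Shannon entropy of the change distribution across modules,
    H = sum(p * log2(1/p)). Higher entropy means changes are scattered
    widely across the system, a common complexity/degradation proxy."""
    total = sum(change_counts)
    probs = [c / total for c in change_counts if c > 0]
    return sum(p * math.log2(1 / p) for p in probs)

def maturity_index(m_total, added, changed, deleted):
    """Software maturity index, SMI = (M_T - (F_a + F_c + F_d)) / M_T.
    Approaches 1.0 as a release stabilizes (few modules touched)."""
    return (m_total - (added + changed + deleted)) / m_total

print(change_entropy([10, 10, 10, 10]))  # maximal spread: 2.0 bits
print(change_entropy([40, 0, 0, 0]))     # all change in one module: zero entropy
print(maturity_index(m_total=200, added=4, changed=10, deleted=2))  # 0.92
```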
Automatically inferring structure correlated variable set for concurrent atom... (ijseajournal)
The atomicity of correlated variables is quite tedious and error-prone for programmers to infer and specify explicitly in a multi-threaded program. Researchers have studied the automatic discovery of the atomic sets programmers intended, for example by frequent itemset mining and rule-based filtering. However, for lack of inspection of the inner structure, some implicit, semantically independent variables intended by the user are mistakenly classified as correlated. In this paper, we present a novel simplification method for the program dependency graph and a corresponding graph mining approach to detect the sets of variables correlated by logical structures in the source code. This approach formulates the automatic inference of correlated variables as mining frequent subgraphs of the simplified data and control flow dependency graph. The presented simplified graph representation of program dependency is not only robust to varieties of coding style but also essential for recognizing logical correlation. We implemented our method and compared it with previous methods on open-source program repositories. The experimental results show that our method has a lower false-positive rate than previous methods in the initial stage of development. We conclude that the presented method can provide programmers during development with a sufficiently precise correlated variable set for checking atomicity.
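The frequent-itemset baseline that this work improves on can be sketched as a toy Apriori-style pass over per-region variable-access sets. The variable names and support threshold below are illustrative, not from the paper.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(access_sets, min_support):
    """Count how often each pair of variables is accessed together
    (e.g. within the same method or critical section) and keep the
    pairs meeting the support threshold as candidate correlated sets."""
    counts = Counter()
    for accessed in access_sets:
        for pair in combinations(sorted(accessed), 2):
            counts[pair] += 1
    return {pair for pair, c in counts.items() if c >= min_support}

# Toy trace: variables accessed together in four code regions.
accesses = [
    {"balance", "log"},
    {"balance", "log"},
    {"balance", "log", "tmp"},
    {"x", "y"},
]
print(frequent_pairs(accesses, min_support=3))
# ("balance", "log") co-occurs 3 times and is reported as correlated;
# pairs below the threshold are discarded.
```

The weakness the paper identifies is visible even here: co-occurrence frequency alone says nothing about *why* two variables appear together, which is what the structural (subgraph-based) analysis adds.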
IMPLEMENTATION OF DYNAMIC COUPLING MEASUREMENT OF DISTRIBUTED OBJECT ORIENTED... (IJCSEA Journal)
This document summarizes a research paper that proposes a method for dynamically measuring coupling in distributed object-oriented software systems. The method involves three steps: instrumentation of the Java Virtual Machine to trace method calls, post-processing of the trace files to merge information, and calculation of coupling metrics based on the dynamic traces. The implementation results show that the proposed approach can effectively measure coupling metrics dynamically by accounting for polymorphism and dynamic binding, overcoming limitations of traditional static coupling analysis.
Software metrics increasingly play a central role in the planning and control of software development projects. Coupling measures have important applications in software development and maintenance. Existing literature on software metrics mainly focuses on centralized systems, while work on distributed systems, particularly service-oriented systems, is scarce. Distributed systems with service-oriented components also run in more heterogeneous networking and execution environments. Traditional coupling measures take into account only "static" couplings. They do not account for "dynamic" couplings due to polymorphism and may significantly underestimate the complexity of software and misjudge the need for code inspection, testing, and debugging. This can be expected to reduce the predictive accuracy of quality models for distributed object-oriented systems that rely on static coupling measurements. To overcome these issues, we propose a hybrid model for measuring coupling dynamically in distributed object-oriented software. The proposed method has three steps: instrumentation, post-processing, and coupling measurement. First, the instrumentation process is carried out using a JVM that has been modified to trace method calls. During this process, three trace files are created, namely .prf, .clp, and .svp. In the second step, the information in these files is merged. At the end of this step, the merged detailed trace of each JVM contains pointers to the merged trace files of the other JVMs, so that the path of every remote call from the client to the server can be uniquely identified. Finally, the coupling metrics are measured dynamically. The implementation results show that the proposed system effectively measures the coupling metrics dynamically.
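The core idea of measuring coupling from runtime traces rather than static code can be mimicked in miniature with Python's sys.settrace. The classes and the counting scheme here are illustrative only, not the paper's JVM instrumentation or its .prf/.clp/.svp pipeline.

```python
import sys
from collections import Counter

calls = Counter()  # (caller_class, callee_class) -> dynamic call count

def tracer(frame, event, arg):
    if event == "call":
        callee = frame.f_locals.get("self")
        caller = frame.f_back.f_locals.get("self") if frame.f_back else None
        if callee is not None and caller is not None and type(caller) is not type(callee):
            # Record an inter-class dynamic coupling, resolving the
            # *runtime* receiver type (this captures polymorphic targets
            # that a static analysis would attribute to the declared type).
            calls[(type(caller).__name__, type(callee).__name__)] += 1
    return tracer

class Shape:
    def area(self): return 0
class Circle(Shape):
    def area(self): return 3.14159
class Report:
    def render(self, shape): return shape.area()  # dynamically bound call

sys.settrace(tracer)
Report().render(Circle())
sys.settrace(None)
print(dict(calls))  # couples Report to Circle, not to the static type Shape
```

A static analyzer would see Report coupled to Shape (the declared parameter type); the dynamic trace reveals the coupling to the concrete subclass actually exercised at runtime, which is exactly the polymorphism gap the proposed method addresses.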
The document discusses object-oriented system development and modeling. It covers topics like:
1. The main stages of traditional system development life cycles like requirements, analysis, design, implementation, and installation. As well as common life cycle models like waterfall, V-model, spiral, and prototyping.
2. Phases of object-oriented development focus on the state of the system rather than activities, including inception, elaboration, construction, and transition.
3. Modeling techniques for object-oriented systems including the Unified Modeling Language (UML), Rational Unified Process (RUP), abstraction, decomposition, and class-responsibility-collaboration (CRC) cards.
4
An Empirical Study Of Requirements Model Understanding (Kate Campbell)
The document describes a study that compares two requirements modeling methods - Use Cases, a scenario-based approach, and Tropos, a goal-oriented approach. The study aims to evaluate how well novice requirements analysts can understand models created with each method. It involved 19 students performing tasks like determining consistency between models and system descriptions, understanding models, and modifying models. Preliminary results found that Tropos models seemed more comprehensible but took more time to understand compared to Use Case models. The full study aims to provide insights into which modeling method better supports requirements analysis tasks.
An Empirical Study Of Requirements Model Understanding: Use Case Vs. Tropos M... (Raquel Pellicier)
The document describes a study that compares two requirements modeling methods - Use Cases and Tropos - to evaluate how well novice requirements analysts can understand models created with each method. The study involved 19 students performing tasks like determining consistency between models and system descriptions, understanding models for analysis activities, and modifying existing models. Preliminary results found that Tropos models seemed more comprehensible to analysts, though they took more time to understand compared to Use Case models. The goal was to provide experimental data on comprehending different modeling paradigms to help decide which to use for a given project.
Contributors to Reduce Maintainability Cost at the Software Implementation Phase (Waqas Tariq)
This document discusses factors that can reduce software maintenance costs during the implementation phase. It observes that maintenance accounts for the largest share of cost across the software development phases. The objective is to define criteria for assessing software quality characteristics and to assist developers during implementation. This should reduce maintenance costs by grouping criteria that support writing standard code, developing a model for applying the criteria, and increasing understandability. Student groups will study code standardization, write programs, and test software maintenance on those programs to validate the model and the proposed criteria.
The document discusses linguistic structures for incorporating fault tolerance into application software. It begins by explaining that as software complexity increases, software faults have become more prevalent and impactful, necessitating fault tolerance at the application level. It then establishes a set of desirable attributes for application-level fault tolerance structures and surveys current solutions, assessing each according to these attributes. The goal is to identify shortcomings and opportunities to develop improved fault tolerance structures.
BIO-INSPIRED REQUIREMENTS VARIABILITY MODELING WITH USE CASE (mathsjournal)
Background. The Feature Model (FM) is the most important technique for managing variability across products in Software Product Lines (SPLs). SPL requirements variability is often handled with a variable use case model, which is a real challenge for current approaches: a large gap between their concepts and those of the real world leads to poor quality and poor FM support, and the variability does not cover all requirements modeling levels.
Defect Management Practices and Problems in Free/Open Source Software Projects (Waqas Tariq)
With the advent of Free/Open Source Software (F/OSS) paradigm, a large number of projects are evolving that make use of F/OSS infrastructure and development practices. Defect Management System is an important component in F/OSS infrastructure which maintains defect records as well as tracks their status. The defect data comprising more than 60,000 defect reports from 20 F/OSS Projects is analyzed from various perspectives, with special focus on evaluating the efficiency and effectiveness in resolving defects and determining responsiveness towards users. Major problems and inefficiencies encountered in Defect Management among F/OSS Projects have been identified. A process is proposed to distribute roles and responsibilities among F/OSS participants which can help F/OSS Projects to improve the effectiveness and efficiency of Defect Management and hence assure better quality of F/OSS Projects.
Assessing The Impact Of Global Variables On Program Dependence And Dependence Clusters
Assessing the impact of global variables on program dependence and
dependence clusters
David Binkley, Mark Harman, Youssef Hassoun, Syed Islam, Zheng Li
King’s College London, Centre for Research on Evolution, Search and Testing (CREST), Strand, London WC2R 2LS, UK
Article info
Article history:
Received 27 October 2008
Received in revised form 24 February 2009
Accepted 19 March 2009
Available online 1 April 2009
Keywords:
Dependence cluster
Program slice
Global variable
Abstract
This paper presents results of a study of the effect of global variables on the quantity of dependence in
general and on the presence of dependence clusters in particular. The paper introduces a simple transfor-
mation-based analysis algorithm for measuring the impact of globals on dependence. It reports on the
application of this approach to the detailed assessment of dependence in an empirical study of 21 pro-
grams consisting of just over 50K lines of code. The technique is used to identify global variables that
have a significant impact upon program dependence and to identify and characterize the ways in which
global variable dependence may lead to dependence clusters. In the study, over half of the programs
include such a global variable and a quarter have one that is solely responsible for a dependence cluster.
© 2009 Elsevier Inc. All rights reserved.
1. Introduction
Global variables are generally deprecated in advice to program-
mers, with many authors arguing that they have negative effects
(Marshall and Webber, 2000; Sward and Chamillard, 2004; Wulf
and Shaw, 1973). The use of global variables has been argued to
have harmful effects on many aspects of software engineering,
including maintainability (Yu et al., 2004) and correctness (Barnes,
2003). Some programming practitioners have gone so far as to sug-
gest, perhaps only semi-seriously, that programmers might be
fired for using global variables (Tennberg, 2002). Of course, the
introduction of global variables can produce potential efficiency
gains (Sestoft, 1989), but such global-introduction transformations
are performed as a meaning-preserving compiler optimization, not
as an approach to source-level code improvement.
Though the typical view in programming language texts (for
example Stroustrup’s C++ book (Stroustrup, 2000)) is that global
variables may often be a source of problems, this view is not uni-
versally held. Some authors have even suggested that global vari-
ables should be used in place of local variables (Fisher, 1983).
However, despite much debate and advice on the use of global
variables over several decades, there remains little empirical study
of the effects of global variables.
Program dependence, which captures the influence of one pro-
gram component on another, is important because it has a bearing
on so many aspects of software engineering. For example, program
dependence has been linked to the ease with which a program can
be understood, in work on program comprehension (Balmas, 2002;
Deng et al., 2001). The effect of program dependence is also felt in
software maintenance and re-engineering, where it delimits the
changes that may be performed (Gallagher and Lyle, 1991; Tonella,
2003) and captures the impact that such changes will have (Black,
2001).
Dependence analysis forms the cornerstone for many software
engineering activities that rely upon program analysis, such as pro-
gram comprehension (Balmas, 2002; Deng et al., 2001), impact
analysis and reduction (Black, 2001; Tonella, 2003), reuse (Cimitile
et al., 1996), software maintenance (Gallagher and Lyle, 1991),
testing and debugging (Binkley, 1997; Harman et al., 2004;
Podgurski and Clarke, 1990), virus detection (Lakhotia and Singh,
2003), and restructuring and reverse engineering (Beck and
Eichmann, 1993; Lakhotia and Deprez, 1998).
This paper presents a technique used to study the effects of
global variables on dependence as well as results from empirical
studies of these effects. The impact of even a single global can be
very far-reaching; in some of the programs studied, a single global
was found to account for most of the program’s overall dependence
connectivity.
The paper is also concerned with the effect that a global variable
has on the presence or absence of large dependence clusters. A
dependence cluster is a set of program statements, all of which
are dependent upon one another. Recent work (Binkley and Har-
man, 2005b) has shown that dependence clusters are surprisingly
prevalent. Therefore, one of the additional goals of the study
reported herein is to explore the ways in which global variables
can lead to dependence clusters.
0164-1212/$ - see front matter © 2009 Elsevier Inc. All rights reserved.
doi:10.1016/j.jss.2009.03.038
* Corresponding author.
E-mail address: binkley@loyola.edu (D. Binkley).
1 On sabbatical leave from Loyola College in Maryland.
The Journal of Systems and Software 83 (2010) 96–107
Contents lists available at ScienceDirect
The Journal of Systems and Software
journal homepage: www.elsevier.com/locate/jss
As this paper will show, global variables can be the cause of
dependence clusters. The ability to identify the causes of depen-
dence clusters has implications for work related to these analyses.
For example, in program comprehension, several authors have con-
sidered hierarchical decomposition as a navigation mechanism
that manages the cognitive complexity of program dependence
(Balmas, 2002; Deng et al., 2001). However, the presence of a
dependence cluster will lead to a degenerate collapsed hierarchy,
in which such decomposition will be difficult. The ability to iden-
tify causes of clusters may allow such navigation tools to either
avoid them or to treat them as special cases. Furthermore, the abil-
ity to break some clusters will improve the applicability of these
approaches.
The paper makes the following primary contributions:
(1) It introduces an algorithm for variable substitution that
allows the dependence due to a particular global variable
to be ignored. This is used to assess and measure the effects
of the global variable on dependence.
(2) The paper presents quantitative results that assess the effect
on dependence of 849 global variables in 21 programs. The
study reveals that more than half the programs considered
have individual global variables that have a significant
impact on overall program dependence.
(3) The paper presents further qualitative results of the effect
these high dependence globals have on large dependence
clusters. In some cases, a single global was found to be the
sole cause of a cluster, establishing evidence for a link
between the use of globals and the presence of large depen-
dence clusters. The study also investigates the categories
into which these effects fall and the source code constructs
that cause them.
(4) Finally, the paper presents a case study in which sets of glo-
bal variables collectively combine to cause dependence
clusters.
The rest of the paper is organised as follows: Section 2 pre-
sents background material on dependence clusters. Section 3
introduces the algorithm for eliminating dependence due to a global
variable and uses it to measure the effect each global has
on overall program dependence. Sections 4 and 5 present the five
research questions addressed by the experiments and the exper-
imental design. Sections 6 and 7 present the results of the quan-
titative and qualitative studies of global variable dependence
effects, while Section 8 presents the case study of multi-global
variable dependence effects. Sections 9 and 10 consider threats
to validity and related work, and finally, Section 11 summarises
the paper.
2. Dependence clusters
A dependence cluster is a maximal set of program points all of
which are mutually inter-dependent. Being maximal, a depen-
dence cluster is not contained within any other dependence
cluster. One complexity in identifying dependence clusters is the
non-transitive nature of dependence for certain language con-
structs such as threads (Krinke, 1998) and procedures (Horwitz
et al., 1990). This means that dependence clusters are not simply
strongly connected components within a dependence graph.
Fortunately, context-sensitive interprocedural dependence analy-
sis captures the necessary calling-context information (Horwitz
et al., 1990).
To visualize dependence clusters the Monotone Slice-size Graph
(MSG) was introduced (Binkley and Harman, 2005b). A slice is a
sub-program that captures a semantically meaningful sub-compu-
tation from a program. Slices can be efficiently computed using the
System Dependence Graph (SDG) (Horwitz et al., 1990). An MSG is
a graph of all a program’s slice sizes plotted in monotonically
increasing order on the x-axis. The y-axis measures slice size.
For ease of comparison the MSGs shown in this paper use the
percentage of the entire program on the y-axis. Thus, an MSG plots
a landscape of monotonically increasing slice sizes, in which
dependence clusters correspond to sheer-drop cliff faces followed
by a long flat plateau (e.g., see the black line in Fig. 1).
To illustrate cluster identification using an MSG, Fig. 1 shows
four example MSGs. As explained below, each graph includes two
MSGs: an original MSG in black and a post reduction MSG in grey.
For example, the MSG in black in the left chart shows (reading
along the horizontal axis) that approximately the first 20% of slices
are very small (containing about 10% of the program), after which
the MSG reveals a sharp increase to almost 80% of the program.
Most of the remaining slices are all of essentially the same size.
This is visual evidence of a dependence cluster in the program.
The MSG for a program devoid of clusters is shown in grey under
this line.
3. Assessing the impact of global variables on dependence
The impact of global variable g is measured in terms of the area
under the MSG. That is, the difference in the area for the MSG con-
structed from the program with and without the dependence due
to g will be deemed to denote the impact of g on overall program
dependence.
Of course, reducing the quantity of overall program dependence
may not mean the breaking of a dependence cluster for which MSG
area reduction is a necessary, but not sufficient condition. This is
illustrated in Fig. 1 where the two charts show the two kinds of
Fig. 1. Two kinds of reduced dependence. In the left chart, the dependence cluster is broken. In the right chart, the cluster size is reduced, but the cluster remains unbroken.
reduction. Both original (black) MSGs include a considerable clus-
ter. The (lower) grey MSG in the left chart shows the result of
ignoring the dependence associated with the global variable enco-
dings; this clearly breaks the cluster. In the chart on the right the
(lower) grey MSG shows only a reduction in the size of the slices
making up the cluster. In this example the cluster itself is still
present.
In order to measure the impact of ignoring the dependences
associated with a given global variable, the approach adopted in
this paper transforms the program to remove all such dependence.
However, although this may be simple in principle, it turns out that
the program analysis task of ‘ignoring dependence’ is not entirely
trivial in practice.
The goal is to ignore all dependence that can be caused by definitions
and references of the global variable. To achieve this, it is
insufficient simply to mark as untraversable the dependences due to a
given global variable. The reason is that such an edge-ignoring
approach does not take into account the secondary effects that a
global variable has upon computations that support the identification
of dependence, such as the dependencies due to pointers and the
summary edge computation used by CodeSurfer (Grammatech Inc., 2002)
to build an SDG.
To illustrate, consider ignoring the dependence caused by global
variable g2 in the code fragment and partial SDG shown in Fig. 2.
Because p may point to g1 or g2, the assignment *p = 23 does not
definitely kill the assignment g1 = 42. However, in the absence
of g2, *p = 23 kills the assignment g1 = 42 because p only points
to g1. This obviates the need for the data dependence edge from
g1 = 42 to local = g1.
As this example shows, simple edge marking is insufficient. To
overcome this, the paper introduces a simple transformation to
syntactically delete the global variable from the AST. Of course,
such a transformation is not meaning preserving. It is not intended
to be. It merely serves the purpose of ignoring all dependence due
to the global variable under consideration. The resulting SDG is
used to analyse the impact of the global by comparison to the ori-
ginal SDG (with all dependence due to the global included). In this
way, the global variable’s impact upon dependence can be fully
assessed.
To study the impact of ignoring the dependences due to a global
variable, it is pragmatic to observe two principles:
(1) The size of the program should not be changed more than
necessary.
(2) The effects of dependences not due to the global should
remain unaffected.
Taking these two concerns into account, the transformation
rules replace each occurrence of a global in such a way that depen-
dence structure is otherwise unchanged while having minimum
impact on program size. For example, consider the assignment
a = g + b where only the dependencies associated with global variable
g are to be ignored. A minimum-impact replacement, assuming
that g is an int variable, would be to replace g with a constant of the
appropriate type (e.g., 0). While this would clearly change the
meaning of the program (unless g happens to always have the value
0), the revised statement a = 0 + b preserves the dependence of
a on b and also the size of the program (in this case right down to
the character level); its sole effect is to remove the dependence of
the statement on global variable g.
In the general case, any type-appropriate constant will suffice.
In the formalization of the replacement, the name ‘TAC’ is used to
denote this type-appropriate constant. For example, the constant
((int *)0) is used for a global variable of type int *.
To further illustrate the transformation, consider the replace-
ment of the expression ‘‘g++”. This is a two-step process: the first
step replaces g++ with g, and the second replaces g with TAC. The
resulting expression removes dependencies associated with g,
but does not change the number of SDG vertices used to represent
the code; thus, in terms of the analysis presented in the next sec-
tion, this change does not impact the program’s size.
A careful analysis of the grammar for C allows program size
reduction to be kept to a minimum. For example, C treats the array
expression a[i] internally as the pointer expression *(a + i), which is
equivalent to *(i + a) and hence i[a]. This allows occurrences of a
global array g to be replaced with a constant and thus removes
the dependence associated with the global array. For example, glo-
bal_array[i++] is transformed into TAC[i++] where the resulting
program omits dependencies associated with global_array, but
maintains the dependencies associated with i.
The formal rules for when a type-appropriate constant is suit-
able are given by the function DL (which identifies expressions that
denote a named location). When this function returns bottom (⊥),
the use of TAC is permitted. Otherwise, the expression involving
the global must be deleted from the program. In the transformation
rules, this latter case leads to the global being rewritten to
the empty string, λ. In most cases, such a replacement does not impact
the program’s (SDG’s) size. Its effect is primarily on the source
represented by a given SDG vertex.
The remainder of the section formalizes the global variable
dependence-elimination transformation. The algorithm, which
operates on C programs, is presented as a set of transformation
rules. Modification for other similar languages (e.g., C++) is
straightforward.
Transformation rules are written as

constraint set
input source ⟹ output source

(the constraint set above the line, the rewrite below it) and make
use of the following notation:
TAC denotes a type-appropriate constant;
λ denotes the empty string;
[x|y] denotes the selection of x or y;
var(declarator) denotes the variable declared. For example,
var(a[]) and var(*a) are both a;
g denotes the global targeted by the transformation.
The rules transform declarations and expressions that involve g.
In general, C language declarations involve storage class and type
specifiers followed by a list of declarators. The transformation,
shown in the upper left of Fig. 3, removes declarators that resolve
to g (e.g., g, g[10], or **g). If this leaves an otherwise empty
(though still syntactically valid) declaration, the tool removes the
entire declaration.
For most expression occurrences of g it is sufficient to replace g
with a TAC using the rule at the upper right of Fig. 3. However,
there are four special cases involving lvalues that require special
treatment because an lvalue denotes a modifiable memory location
(Ritchie, 1975). Lvalues are defined by the following production
from the C grammar (Ritchie, 1975):
Fig. 2. Illustration of where marking edges is insufficient.
lvalue → identifier
→ * expression
→ primary ‘[’ expression ‘]’
→ primary -> identifier
→ lvalue . identifier
→ ( lvalue )
Only lvalues that denote (part of) the location represented
by g require special treatment. Those that simply involve g in
denoting some other location do not. For example, no special treat-
ment is needed for g[i], which is replaced by TAC[i] to correctly re-
tain the use of i. As noted above, this is legal in C and thus a
convenient method for achieving the minimum impact removal
of the dependence induced by the global. In the four special cases,
the function DL identifies those lvalues that denote a location di-
rectly associated with an identifier:
fun DL identifier = identifier
| DL * expression = ⊥
| DL primary ‘[’ expression ‘]’ = ⊥
| DL primary -> identifier = ⊥
| DL lvalue . identifier = DL(lvalue)
| DL ( lvalue ) = DL(lvalue)
Thus, DL(a[i++]) = ⊥ because there is no identifier associated
with the incremented memory location, while DL(p.x) = p because
part of the location referred to by p is modified.
The four special cases are shown as the bottom four rules in
Fig. 3; they occur when (some part of) g’s address is taken, when
it is modified by an assignment operator, and when an increment
or decrement operator is involved. In the first case, NULL is used
rather than TAC. For the second, the assignment portion of the
assignment expression is removed. That is, ‘‘g+= i++” is trans-
formed into ‘‘i++”. For the final two cases, the increment or decre-
ment operator is simply removed.
It is interesting to note that, for occurrences of g such as *g = 2,
the general rule is sufficient because g itself is not being modified.
In practice this leaves an anonymous location being updated
(through TAC). Such locations do not cause dependencies in the
SDG, and thus the transformation retains only the effect of the
right-hand side of the assignment expression. There are other sim-
ilar cases when the transformation removes parts of statements
that have no effect on the program’s dependencies. For example,
the transformation of ‘‘g++” leaves ‘‘TAC”. Further examples are
shown in Fig. 4.
4. Research questions
The empirical analysis addresses three sets of research ques-
tions, concerning the qualitative and quantitative effects of single
globals and the combined effects of sets of globals (multiple glo-
bals). The study reports results on the quantitative effects that glo-
bal variables have upon program dependence in general and the
specific qualitative effect that high dependence globals have upon
the program’s large dependence clusters.
The majority of the results concern the effects of single globals
because these potentially produce the most valuable information
to the software engineer. That is, if it is possible to identify a single
global variable that can be deemed to be responsible for a depen-
dence cluster, then the engineer has a chance to consider the
meaning of this variable and ways in which it might be possible
to account for, ameliorate or otherwise reduce the impact of the
dependence cluster. Clearly, where clusters have multiple causes,
such amelioration may also be possible, but it is likely that dealing
with single causes will be preferable.
Research Question RQ1.1:
Overall quantitative effect of single global
Over all programs, what proportion of dependence is due to glo-
bal variables and how many globals have a significant impact
(as defined in Section 5.2) to total program dependence?
Research Question RQ1.2:
Quantitative effect of single global on each program
For each program considered separately, how many globals
have a significant impact (again using the statistical tests
described in Section 5.2) on dependence and what is the magni-
tude of their effect? (This question makes more sense knowing
the results for RQ1.1.)
Research Question RQ2.1:
Qualitative assessment of effect of single global on dependence
clusters
What effects do global variables have upon dependence
clusters?
Research Question RQ2.2:
Qualitative assessment of the causes in source code
What source code patterns are found that correspond to the
effects of global variables on dependence clusters?
The four research questions above concern the effects of single
global variables. Research Question RQ3 concerns the effects
of multiple globals on dependence clusters.
Research Question RQ3:
Qualitative and quantitative effect of multiple globals on each
program
What effects can be found where multiple global variables par-
ticipate collectively in causing dependence clusters?
Fig. 3. Transformations for removing variable g.
Fig. 4. Transformation examples showing the replacement of g with TAC.
5. Experimental design
This section briefly describes the programs studied and the tests
used to assess whether a global variable has a significant impact
upon dependence.
5.1. Programs studied and their MSGs
The 21 programs used as subjects in the study are described in
Fig. 5. They were taken from online repositories such as
directory.fsf.org and planet-source-code.com, except for space, which
is code created for the European Space Agency. For each program, the fig-
ure includes the name of the program, the number of slices taken
when analyzing it, its size in lines of code as counted by the unix
utility wc and the number of non-blank non-comment lines of code
as counted by sloc (Wheeler, 2005), and finally a brief description.
These programs cover a variety of application domains such as pro-
gramming utilities, terminal software, booking systems, simula-
tions, games, interpreters, and code transformation tools.
5.2. Statistical tests applied
The statistical tests used in this paper follow the well estab-
lished approach to measuring the significance of effects adopted
in work on control limits theory (Reid and Sanders, 2007). Two
statistical tests are applied to determine those global variables that
have significant impact on overall program dependence. The first is
based on the notion of outliers, while the second is based on vari-
ance. A global variable is taken to have a significant impact upon
program dependence if it is determined to be significant by both
tests. The reason for considering both tests is to reinforce each
individual test. Furthermore, tests based upon outliers are more
robust against non-normality in the data distribution. Finally, note
that most common statistical tests, such as a t-test, are not appli-
cable in this case. Many such tests assume that the data come from
a normal population and compare population means, but are not
designed to test if a given value is expected to lie outside a
population.
Using the outlier approach (Armitage and Berry, 1994), a point
is significant if it lies three times the interquartile range below (or
above) the mean. Thus, using this approach, a global variable has a
significant impact upon program dependence if it causes a reduc-
tion in the area under the MSG that is more than three times the
interquartile range below the mean. In the variance approach
(Armitage and Berry, 1994), a point more than two standard
deviations below (or in general above) the mean is significantly
different, while a point three or more standard deviations below
the mean is taken as highly significantly different. Thus, using this
approach, a global variable has a significant impact upon program
dependence if it causes a reduction in the area under the MSG that
is two standard deviations below the mean. Furthermore, it has a
highly significant impact if it causes a reduction in the area under
the MSG that is three standard deviations below the mean.
In the data analysis the interquartile range and the standard
deviation are computed using two different samples. The first
includes all the data from all the programs collectively; a global
variable is considered to be significant iff it causes a significant
impact on the quantity of dependence, relative to the mean of all
849 variables considered in the whole study. The second approach
considers each program in isolation; thus, a global variable is
considered to be significant iff it causes a significant effect on
dependence, relative to the mean of all global variables in that
program.
It turns out that, for all programs, the variance approach is a
stricter test for significance than the outlier approach: if a result
is significant according to the variance approach, then it is also
significant with respect to the outlier approach (though not neces-
sarily vice versa). In the experiment, while both tests were applied
and the resulting sets of global variables intersected, the result was
Fig. 5. The 21 programs studied.
Fig. 6. Histogram showing the counts (on the y-axis) of globals leading to different ranges of dependence reduction. As the left-hand histogram shows, most global variables
have little effect; they fall into the 0–10% range. The right-hand histogram zooms in on the remaining data, which shows reductions from 10% to 100%.
always the same as if only the stricter variance approach had
been used.
6. Impact on dependence levels
The quantitative study is concerned with addressing Research
Questions RQ1.1 and RQ1.2. It provides results of the assessment
of the impact of each global variable (of 849 in total) on the depen-
dence in the programs in which these globals reside.
6.1. RQ1.1: overall effects of single global causes
For each of the programs, the MSG was first computed using the
unmodified program. Then, it was re-computed 849 times. Each re-
computation ignored the dependence due to one of the global vari-
ables. Dependence effects were then assessed by comparing the
area under the MSGs.
Over all globals in the study, the average area remaining after a
global variable’s dependences are ignored was 97.4% with a stan-
dard deviation of 8.6%. Fig. 6 shows a histogram of results for the
reduction due to globals. Clearly, most globals cause little reduc-
tion: 800 of the 849 global variables cause no more than a ten per-
cent reduction. However, more importantly, as can be seen in the
right hand detailed histogram, there do exist some global variables
that have a considerable impact on the quantity of dependence in
the programs in which they reside.
Fig. 7 shows the 30 of the 849 global variables that have a signif-
icant impact, with those having a highly-significant impact shown
in black. The y-axis shows the remaining dependence for each glo-
bal variable shown on the x-axis. This represents 3.5% of all the glo-
bal variables. Therefore, the answer to RQ1.1 is that there are a few
individual global variables that have a significant effect on the
quantity of program dependence, but most globals have little effect.
6.2. RQ1.2: per program effects of single global causes
The answer to Research Question RQ1.1 suggests that there are
only a few global variables that have a significant effect on depen-
dence. A natural question to ask is whether these are concentrated
in a few programs that are in some way ‘special cases’, or whether
there are many programs that contain a global that causes a signif-
icant effect on dependence. This is the question raised by Research
Question RQ1.2.
Over all 21 programs, twelve programs include at least one of
the variables identified in the previous section as causing an over-
all significant dependence reduction. Fig. 8 shows the reductions
for the top four variables from each of these twelve programs
(the breakdown of the reduction kinds is discussed in the next
section). It is noticeable from this figure that the second and
subsequent global variables have considerably lower impact than
that of the first global variable. These results provide an intriguing
answer to Research Question RQ1.2: though there may be few
significant global variables overall, more than half the programs
studied have only one of them.
Section 5.2 describes two statistical tests that are applied to
determine if a global variable causes a significant difference. These
Fig. 7. Impact of ignoring dependencies for each global variable. The chart measures the remaining dependence on the y-axis for the 3.5% of the global variables that have a
significant impact. Highlighted in black are those that have a highly significant impact.
Fig. 8. Impact of ignoring dependencies for each global variable. The chart shows the four reduction causing global variables for programs where at least one significant global
variable exists.
tests can be applied to the entire pool of global variables (as done
above) or to the globals of each program independently. This sec-
ond version asks not whether a variable g is significant among all
variables studied, but whether g is significant among only those
globals that reside in the same program as g. The results serve to
strengthen confidence in the answer to RQ1.2.
Fig. 9 shows a comparison of the two approaches to assessing
significance. Fig. 10 shows the visual relationship between the
two approaches to testing for significance. As can be seen from
Fig. 9, apart from one case (the program time), all those programs
that contain global variables that are significant among all globals,
also contain globals that are significant in the program in which
they reside. This lends additional evidence to support the belief
that many programs may contain globals with a significant effect
on dependence.
7. Impact on dependence clusters
The previous section considered the quantitative impact of glo-
bal variables on dependence in general. This section focuses on the
specific impact of global variables on the dependence clusters of
the programs studied.
7.1. RQ2.1: overall categorization of the effects of single global causes
From Section 2, a reduction in area under the MSG is a neces-
sary, but not sufficient condition for the breaking of a dependence
cluster. Research Question RQ2.1 asks if the reduction in area un-
der the MSG is accompanied by a corresponding breaking of clus-
ters. This is a more subjective determination.
To address this question the 849 MSGs were examined. Four
patterns emerge, which will be denoted: break, partial break, sub-
cluster break, and drop. Representative examples of the four are
shown in Fig. 11 where each graph shows two MSGs: the original
MSG in black and the MSG after ignoring the dependence of the gi-
ven global variable in grey. For example, the MSGs for barcode
shown at the far left illustrate the break case where a large depen-
dence cluster disappears. Next to this is a partial break. Here the
MSG (when ignoring buffr of pc2c) shows a clear but partial break-
ing of the dependence clusters of pc2c. In this case approximately
three fifths of the cluster disappears. Next to this is a sub-cluster
break, where the one large cluster in the MSG for ctags fractures into
a collection of smaller clusters (evidenced by the stair step pattern
in the MSG). The final MSG illustrates the drop case where no clus-
ter breaking occurs; the sizes of the slices making up the cluster are
simply reduced, though the cluster remains. Thus, the characteris-
tic cliff-face and plateau is still present but with reduced height.
As shown in Section 6 using the data over all programs, 29 glo-
bal variables are significant, while using the per program data, 26
global variables are identified. Those global variables identified by the per-program data but not by the all-program data are all drops of small magnitude.
Fig. 12 summarizes the categorization of the effect of each sig-
nificant global. Numerically, the 16 global variables significant in
both approaches produced three breaks, six partial breaks, two
sub-cluster breaks, and five drops. Thus, in total, ignoring the dependence associated with just over half of the significant global variables (1.3% of all globals, 11 of the 849) led to the breaking of clusters. Fig. 8 includes the categorization for each program
using the data over all programs.
Over half the global variables that are considered significant
(either per program or overall) play an important role in the forma-
tion of a dependence cluster. Furthermore, as can be seen from
Fig. 8, all of the programs that contain a global variable that has
a significant effect on dependence also contain a global variable
with a non-trivial role to play in the formation of large dependence
clusters.
7.2. RQ2.2: source code features that appear to cause clusters
To answer Research Question RQ2.2, the source code of each program with one or more significant global variables was inspected in an attempt to identify source code patterns that cause the use of globals to lead to the presence of dependence clusters.
Four patterns emerge. These will be denoted: central data structure,
top-up, lazy programmer, and library. Each is now defined and illus-
trated using case study examples from the code studied.
Examples of a central data structure include the board in a game, the memory registers used by interpreter, the current instruction processed by the assembler fass, and the current state of the parser within the pretty printer indent. In most cases, ignoring the dependence
associated with such a global variable simply causes a drop, but
not a breaking of dependence clusters. This is primarily because
dependencies involving other variables continue to tie the dispa-
rate parts of the cluster together.
Fig. 10. Number of programs with various numbers of global variables having a
significant impact.
Fig. 9. For each program, the mean reduction leading to the number of globals
causing a significant reduction (penultimate column). Each line of the table includes
the mean area when ignoring the dependence of each global, the standard deviation
of the mean, the cutoff for a significant impact, and the two counts. The final
column is the number of significant globals using the mean over the entire
collection of programs, which is shown in the first line. The program replace has no
globals and the program space has only one.
102 D. Binkley et al. / The Journal of Systems and Software 83 (2010) 96–107
However, in some cases, the central data structure is all that
binds the cluster; ignoring its dependencies breaks it. For example,
in fass, the current instruction is held in a central data structure.
Ignoring this global variable’s dependencies disconnects the code
for processing each kind of instruction.
The second pattern, top-up, is caused by a variable that adds
an often small increment to a large collection of slices. The
most common cause of this pattern is an input buffer where
the reading of the input is part of most slices and gets ‘cut
off’ from each of these slices when ignoring the input buffer’s
dependencies.
In four of the twelve occurrences of this pattern, the input was
subsequently used in sufficient decision logic to also lead to some
evidence of cluster breaking. With three of the four, this applies to
only a small number of slices. The fourth is buffr from pc2c. As
shown in the upper right chart of Fig. 11, approximately three
fifths of the large cluster is broken up by ignoring the (decision
based) dependencies of buffr.
The third pattern, lazy programmer, occurs when a single global
variable is used in place of a collection of locals. Often this pattern
is obvious from the global variable’s name (e.g., temp or xxindex).
This pattern causes needless dependence connections between
functions; thereby, linking together otherwise unrelated parts of
the program.
An example of this pattern is the global FRS (Function Return
String) from pc2c. Its dependencies’ impact on the MSG (a drop)
can be seen in the lower right of Fig. 11. Other examples include
(loop) counters and (input) file pointers.
The final pattern, library, occurs in three of the programs. It is
the only one that was always associated with at least partial break-
ing of dependence clusters in this study. This pattern is similar to
the central data structure pattern except that the data structure is
read only. Instances include
struct encoding encodings[]; from barcode,
static parserDefinition **LanguageTable; from ctags, and
char *output_format; from time.
For example, the array encodings holds a pointer to each of the
kinds of barcode that the barcode program is able to encode.
Similarly, ctags can produce tags for a variety of languages. The
selection is achieved by indexing into the global array
LanguageTable. Finally, output_format is used by the output function
of time. The format is iterated over; thus, tying together the code
for all the various output formats.
Library variables cause dependence clusters primarily because
static analysis tools cannot determine that the particular element
chosen will not change. For example, from the static analysis point
of view, it is possible for barcode to switch encoding functions in
the middle of an encoding. This serves to link together the different
encoders into one large cluster. The (external) insight that only one
Fig. 11. Examples from the classification of MSGs.
Fig. 12. Effect of global variables with a significant impact on dependence clusters.
encoder is ever used for any given encoding (execution of the
program) would allow a comprehension tool, for example, to break
the cluster. However, such a tool would require sophisticated do-
main knowledge, placing this beyond the abilities of current
dependence analysis technology.
8. Multiple global causes
This section considers the effects of ignoring the dependencies
associated with combinations of globals. Several of the programs
shown in Fig. 8 include such sets of multiple significant global vari-
ables. Program pc2c is used as an illustrative case study. Similar
patterns exist in compress, file_server, protest, and sudoku. The case
study considers the two significant global variables, FRS and buffr, and the ‘almost significant’ global variable buffw. The dependence
reductions achieved by ignoring various combinations are shown
on the y-axis of the chart at the top left of Fig. 13.
Next to the percent reduction bar chart, the top row of Fig. 13
shows pc2c’s original MSG and the MSG resulting from ignoring
the dependence of all three variables. The interesting thing to note
in the final MSG is the complete breaking of the clusters. The
second row shows the MSGs for buffr, FRS, and their combination,
which is interesting because it shows more than simply the
combined effect. That is, in addition to the ‘break’ and ‘drop’, it also
shows evidence of a sub-cluster break. The final three charts show
the MSG produced when ignoring dependence associated with
buffw, buffw and FRS, and buffr and buffw.
9. Threats to validity
This section considers threats to the external, internal, and
construct validity of the results presented. The main external
threat arises from the possibility that the selected programs are
not representative of programs in general, with the result that
Fig. 13. MSGs for selected global variable of pc2c showing the impact of ignoring dependences associated with combinations of global variables.
the findings of the experiments do not apply to ‘typical’ programs.
The programs studied span a wide variety of tasks, including applications, utilities, games, and system code. There
is, therefore, reasonable cause for confidence in the results ob-
tained and the conclusions drawn from them. However, all of the
programs studied were C or C++ programs. Therefore, it would
be premature to infer that the results necessarily apply to other
programming languages.
Internal validity is the degree to which conclusions can be
drawn about the causal effect of the independent variable on the
dependent variable. In this experiment, the primary threat comes
from the potential for faults in the tools used to gather the data.
A mature and widely used slicing tool, CodeSurfer (Grammatech Inc., 2002), was used to mitigate this concern.
Construct validity assesses the degree to which the variables
used in the study accurately measure the concepts they purport
to measure. Note that in the presence of human judgments, con-
struct validity is a more serious concern. In this study the only
measurement is of slice size, which can be done accurately.
10. Related work
This paper considers the role global variables play in source-level
dependence in general and dependence clusters in particular.
Though global variables have long been regarded as a potential cause of problems (Wulf and Shaw, 1973), there has been no previous work that has provided a quantitative assessment of their impact on dependence. Previous work on dependence clusters in software
engineering (Eisenbarth et al., 2001; Mahdavi et al., 2003; Mitchell
and Mancoridis, 2006) has focused on higher levels of abstraction,
such as models or functions. Previous work on source-level depen-
dence clusters has been primarily carried out in support of compiler
analysis where semantics preservation is a key requirement (Fischer
and LeBlanc, 1988; Jones and Muchnick, 1981; Reps, 1994).
Dependence analysis has been shown to be effective at reducing
the computational effort required to automate the test-data gener-
ation process (Harman et al., 2007). Using dependence analysis, it
is possible to reduce both the amount of code to be tested and
the size of the input domain. However, the presence of large
dependence clusters will mean that no such reduction can be
achieved when testing any part of the program that lies in a clus-
ter. Identifying and busting these clusters can therefore be thought
of as a step towards improving testability.
In software maintenance, dependence analysis is used to protect
a software maintenance engineer against the potentially unfore-
seen side effects of a maintenance change. This can be achieved
by measuring the impact of the proposed change (Black, 2001) or
by attempting to identify portions of code for which a change can
be safely performed, free from side effects (Gallagher and Lyle,
1991; Tonella, 2003). Unfortunately, all statements in a depen-
dence cluster transitively impact all other statements in the cluster.
Therefore, the ripple effect for these statements will be large and
any attempt to perform a modification will be challenging.
The transformation based approach to the assessment of dependence due to globals and the effects on dependence clusters is similar to Krinke’s barrier slicing (Krinke, 2003) and the ‘wedge’ transformation of Lakhotia and Deprez (1998). In barrier slicing, barriers are used to prevent consideration
of any dependence past the barrier. In tucking, a wedge is inserted
into the code to ‘cap off’ dependence before the wedge so that the
code may be split out and folded into smaller, ideally more cohe-
sive, sub units.
The work reported in this paper is part of a research agenda,
currently being pursued by two of the present authors (Harman
and Binkley) and their colleagues and collaborators. For this agen-
da, dependence analysis is advocated as a way to provide tools and
techniques for empirical assessment of dependence structures. The
present paper is an invited submission to a special issue of JSS and
it was largely through this work that the invitation to submit the
present paper arose. The authors have been encouraged by the edi-
tor to include a brief overview of this previous work in this section.
To achieve this, the following paragraphs set out the results so far
established in this on-going ‘empirical dependence analysis’ re-
search agenda.
Binkley and Harman (2003b) presented the first study which
aimed to answer the question: ‘How big is a typical program slice?’.
Though slicing had been first proposed some 24 years previously
by Weiser (1979), there had not been any subsequent attempt to
systematically slice a large code base using every possible slicing
criterion, thereby providing baseline data on slice size. This paper
aimed to do just that. The paper constructed slices for 43 programs
containing just over one million lines of code. To date, this remains
the largest empirical study of slicing in the literature. It also con-
sidered the impact that calling context and structure field expan-
sion has on slice size. In order to construct the large number of
slices required, several novel slice efficiency techniques were intro-
duced (Binkley and Harman, 2003c; Binkley et al., 2007b). The pa-
per was later extended (Binkley et al., 2007a) to provide a larger
study that also considered the effects of dead code, pointer analy-
sis, and slice granularity on the size of slices produced.
Subsequently, Binkley and Harman noticed that, though for-
ward and backward slicing are dual notions of dependence, there
is an interesting difference in the distribution of size of slices. That
is, because of the duality of forward and backward dependence, the
average size of a set of forward slices of procedure or program p
will be identical to the average size of the backward slices of p.
However, the distributions of these slices are very different; the forward slices tend to be smaller. This was demonstrated empirically,
where it was shown that the difference in slice-size distribution is
entirely due to the effects of control dependence in structured lan-
guages (Binkley and Harman, 2005a). This realization lends forward slicing a hitherto unrecognized importance, making it all the more surprising that forward slicing has been largely overlooked in the literature by comparison with its much more widely-studied counterpart: backward slicing.
Though forward slices have been demonstrated to be smaller
than backward slices, the sad fact remains that all static slices, for-
ward or backward, tend to be rather large. That is, a programmer
faced with a million line program, is unlikely to be consoled by a
300,000 line slice; though the slice may be smaller than the origi-
nal program, the threshold at which comprehension support be-
comes realistic remains some way off. In order to address this
slice size problem, a new form of dependence analysis called Key
Statement Analysis was introduced (Harman et al., 2002) and re-
cently empirically studied (Binkley et al., 2008a). In Key Statement
Analysis, dependence computation is used to target those few
statements upon which the program’s dependence revolves. The
empirical findings demonstrate that key statement analysis can
be used to identify the few statements in a program that capture
most of the impact of the whole program’s dependence.
A separate study presented the effects of formal parameters and
global variables on levels of predicate dependence (Binkley and
Harman, 2003a; Binkley and Harman, 2004a). Predicate depen-
dence was considered a subject worthy of study because the pred-
icates of a program capture its essential logical intention, the
comprehension of which underpins so many software engineering
activities. The primary finding of this work was the observation
that, as the number of formal parameters available to a predicate
increases, the proportion upon which it depends tends to decrease.
This was noticed in many of the programs studied and it was a
trend that was borne out by statistical analysis. No such trend
was observed for backward slicing. This result may indicate that
as functions increase the number of formal parameters available,
they tend to become less cohesive.
Subsequent work (Jiang et al., 2008; Tao et al., 2008), exploited
this observation regarding cohesion to develop Search Based Soft-
ware Engineering techniques for automating the process of slicing
procedures, guided by fitness functions that capture dependence
interactions.
Previous work has also considered other potentially harmful effects of dependence structures that can be uncovered using static analysis. Chief among these ‘dependence anti patterns’ (Binkley
et al., 2008b) are dependence clusters (Binkley and Harman,
2005b). This work provided a definition of several forms of anti
pattern and dependence-based techniques for locating them. The
empirical results indicated how these techniques found examples
of anti patterns in open source and production industrial code.
Other work illustrated the way in which the normally static nature
of these forms of dependence analysis could be brought to life in
animations that provide a ‘fourth dimension’ to dependence visualization (Binkley et al., 2006).
Previous work has also presented empirical results on the rela-
tionship between high level program concepts (such as credit,
undercarriage and holiday entitlement) and low level dependence
at the statement level (Binkley et al., 2008; Gold et al., 2006). This
work revealed that code which is conceptually similar also has a
tighter, more cohesive dependence structure.
In 2004, Binkley and Harman provided a detailed survey of
empirical results on program slicing (Binkley and Harman,
2004b), to which the reader is referred for a more detailed account
of related work and results concerning program slicing and pro-
gram dependence.
11. Summary and future work
This paper is concerned with the effect of global variables on pro-
gram dependence. It introduces a technique for measuring the ef-
fect of a global variable on the quantity of dependence present in
a program and uses this to study the effects of 849 global variables
from 21 programs. The results show that, while most global vari-
ables have essentially no impact on program dependence, there
are a few that have a large and significant effect. Moreover, though
there may be few such global variables, many programs (more than
half those studied) have at least one such significant variable.
The paper also studies the way in which dependencies due to
some global variables hold together large dependence clusters.
The results show that globals can be the sole cause of such clusters.
The paper presents a categorization of these effects and examines
the source code patterns behind the clusters.
The empirical results presented in the paper are findings from a
study of C and C++ programs. Future work will consider other pro-
grams and programming paradigms to assess the degree to which
these results generalize. It will also investigate the opportunities for
dependence cluster-breaking refactoring suggested by the finding that
global variables may be the sole cause of some dependence clusters.
Future work will consider the relationships between the depen-
dence clusters of a program, techniques for helping the program-
mer to break them into smaller, more manageable clusters,
empirical assessment of their effects on program comprehension
and other potential causes of dependence clusters.
References
Armitage, P., Berry, G., 1994. Statistical Methods in Medical Research. Macmillan,
London.
Balmas, F., 2002. Using dependence graphs as a support to document programs. In:
Second IEEE International Workshop on Source Code Analysis and
Manipulation. IEEE Computer Society Press, Los Alamitos, California, USA, pp.
145–154.
Barnes, J., 2003. High Integrity Software: The SPARK Approach to Safety and
Security. Addison Wesley, New York, NY.
Beck, J., Eichmann, D., 1993. Program and interface slicing for reverse engineering.
In: IEEE/ACM 15th Conference on Software Engineering (ICSE’93). IEEE
Computer Society Press, Los Alamitos, California, USA, pp. 509–518.
Binkley, D.W., 1997. Semantics guided regression test cost reduction. IEEE
Transactions on Software Engineering 23 (8), 498–516.
Binkley, D.W., Harman, M., 2003a. An empirical study of predicate dependence
levels and trends. In: 25th IEEE International Conference and Software
Engineering (ICSE 2003). IEEE Computer Society Press, Los Alamitos,
California, USA, pp. 330–339.
Binkley, D.W., Harman, M., 2003b. A large-scale empirical study of forward and
backward static slice size and context sensitivity. In: IEEE International
Conference on Software Maintenance. IEEE Computer Society Press, Los
Alamitos, California, USA, pp. 44–53.
Binkley, D.W., Harman, M., 2003c. Results from a large-scale study of performance
optimization techniques for source code analyses based on graph reachability
algorithms. In: IEEE International Workshop on Source Code Analysis and
Manipulation (SCAM 2003). IEEE Computer Society Press, Los Alamitos,
California, USA, pp. 203–212.
Binkley, D.W., Harman, M., 2004a. Analysis and visualization of predicate
dependence on formal parameters and global variables. IEEE Transactions on
Software Engineering 30 (11), 715–735.
Binkley, D.W., Harman, M., 2004b. A survey of empirical results on program slicing.
Advances in Computers 62, 105–178.
Binkley, D., Harman, M., 2005a. Forward slices are smaller than backward slices.
In: Fifth IEEE International Workshop on Source Code Analysis and
Manipulation. IEEE Computer Society Press, Los Alamitos, California, USA,
pp. 15–24.
Binkley, D., Harman, M., 2005b. Locating dependence clusters and dependence
pollution. In: 21st IEEE International Conference on Software Maintenance. IEEE
Computer Society Press, Los Alamitos, California, USA, pp. 177–186.
Binkley, D., Harman, M., Krinke, J., 2006. Animated visualisation of static analysis:
characterising, explaining and exploiting the approximate nature of static
analysis. In: Sixth International Workshop on Source Code Analysis and
Manipulation (SCAM 06), Philadelphia, Pennsylvania, USA, pp. 43–52.
Binkley, D.W., Gold, N., Harman, M., 2007a. An empirical study of static program
slice size. ACM Transactions on Software Engineering and Methodology 16 (2),
1–32.
Binkley, D.W., Harman, M., Krinke, J., 2007b. Empirical study of optimization
techniques for massive slicing. ACM Transactions on Programming Languages
and Systems 30, 3:1–3:33.
Binkley, D., Gold, N., Harman, M., Li, Z., Mahdavi, K., 2008a. Evaluating key statements analysis. In: 8th International Working Conference on Source Code Analysis and Manipulation (SCAM’08). IEEE Computer Society, Beijing, China, pp. 121–130.
Binkley, D., Gold, N., Harman, M., Li, Z., Mahdavi, K., Wegener, J., 2008b. Dependence anti patterns. In: Fourth International ERCIM Workshop on Software Evolution and Evolvability (Evol’08), L’Aquila, Italy, pp. 25–34.
Binkley, D., Gold, N., Harman, M., Li, Z., Mahdavi, K., 2008. An empirical study of the
relationship between the concepts expressed in source code and dependence.
Journal of Systems and Software 81 (12), 2287–2298.
Black, S.E., 2001. Computing ripple effect for software maintenance. Journal of
Software Maintenance and Evolution: Research and Practice 13, 263–279.
Cimitile, A., De Lucia, A., Munro, M., 1996. A specification driven slicing process for
identifying reusable functions. Software Maintenance: Research and Practice 8,
145–178.
Deng, Y., Kothari, S., Namara, Y., 2001. Program slice browser. In: 9th IEEE
International Workshop on Program Comprehension. IEEE Computer Society
Press, Los Alamitos, California, USA, pp. 50–59.
Eisenbarth, T., Koschke, R., Simon, D., 2001. Locating features in source code, IEEE
Transactions on Software Engineering 29 (3) (special issue on ICSM 2001).
Fischer, C.N., LeBlanc, R.J., 1988. Crafting a Compiler, Benjamin/Cummings Series in
Computer Science. Benjamin/Cummings Publishing Company, Menlo Park, CA.
Fisher, D.L., 1983. Global variables versus local variables. Software – Practice and
Experience 13 (5), 467–469.
Gallagher, K.B., Lyle, J.R., 1991. Using program slicing in software maintenance. IEEE
Transactions on Software Engineering 17 (8), 751–761.
Gold, N., Harman, M., Li, Z., Mahdavi, K., 2006. An empirical study of executable
concept slice size. In: 13th Working Conference on Reverse Engineering (WCRE
06), Benevento, Italy, pp. 103–114.
Grammatech Inc., 2002. The CodeSurfer slicing system. www.grammatech.com.
Harman, M., Gold, N., Hierons, R.M., Binkley, D.W., 2002. Code extraction algorithms
which unify slicing and concept assignment. In: IEEE Working Conference on
Reverse Engineering (WCRE 2002). IEEE Computer Society Press, Los Alamitos,
California, USA, pp. 11–21.
Harman, M., Hu, L., Hierons, R.M., Wegener, J., Sthamer, H., Baresel, A., Roper, M.,
2004. Testability transformation. IEEE Transactions on Software Engineering 30
(1), 3–16.
Harman, M., Hassoun, Y., Lakhotia, K., McMinn, P., Wegener, J., 2007. The impact of
input domain reduction on search-based test data generation. In: ACM
Symposium on the Foundations of Software Engineering (FSE’07), Association
for Computer Machinery, Dubrovnik, Croatia, pp. 155–164.
Horwitz, S., Reps, T., Binkley, D.W., 1990. Interprocedural slicing using dependence
graphs. ACM Transactions on Programming Languages and Systems 12 (1), 26–
61.
Jiang, T., Harman, M., Hassoun, Y., 2008. Analysis of procedure splitability. In: 15th Working Conference on Reverse Engineering (WCRE’08), Antwerp, Belgium, pp. 247–256.
Jiang, T., Gold, N., Harman, M., Li, Z., 2008. Locating dependence structures using
search based slicing. Journal of Information and Software Technology 50 (12),
1189–1209.
Jones, N., Muchnick, S. (Eds.), 1981. Program Flow Analysis: Theory and
Applications. Prentice-Hall, Englewood Cliffs, NJ.
Krinke, J., 1998. Static slicing of threaded programs. In: ACM SIGPLAN-SIGSOFT
Workshop on Program Analysis for Software Tools and Engineering (PASTE’98),
pp. 35–42.
Krinke, J., 2003. Barrier slicing and chopping. In: IEEE International Workshop on
Source Code Analysis and Manipulation (SCAM 2003). IEEE Computer Society
Press, Los Alamitos, California, USA, pp. 81–87.
Lakhotia, A., Deprez, J.-C., 1998. Restructuring programs by tucking statements into
functions. Information and Software Technology Special Issue on Program
Slicing 40 (11–12), 677–689.
Lakhotia, A., Singh, P., 2003. Challenges in getting formal with viruses. Virus
Bulletin, 15–19.
Mahdavi, K., Harman, M., Hierons, R.M., 2003. A multiple hill climbing approach to
software module clustering. In: IEEE International Conference on Software
Maintenance. IEEE Computer Society Press, Los Alamitos, California, USA, pp.
315–324.
Marshall, L.F., Webber, J., 2000. Gotos considered harmful and other programmers
taboos. In: Blackwell, A., Bilotta E., (Eds.), 12th Psychology of Programmers
Interest Group Annual Workshop (PPIG 12), pp. 171–180.
Mitchell, B.S., Mancoridis, S., 2006. On the automatic modularization of software
systems using the bunch tool. IEEE Transactions on Software Engineering 32 (3),
193–208.
Podgurski, A., Clarke, L., 1990. A formal model of program dependences and its
implications for software testing debugging and maintenance. IEEE
Transactions on Software Engineering 16 (9), 965–979.
Reid, D., Sanders, N.R., 2007. Operations Management: An Integrated Approach,
third ed. Wiley.
Reps, T.W., 1994. Solving demand versions of interprocedural analysis problems. In:
Fritzon, P. (Ed.), Compiler Construction, Fifth International Conference of
Lecture Notes in Computer Science, vol. 786. Springer, Edinburgh, UK, pp.
389–403.
Ritchie, D., 1975. The C reference manual. cm.bell-labs.com/cm/cs/who/dmr/
cman.pdf.
Sestoft, P., 1989. Replacing function parameters by global variables. In: Fourth International Conference on Functional Programming Languages and Computer Architecture, Imperial College, London, IFIP and ACM, ACM Press and Addison-Wesley, pp. 39–53.
Stroustrup, B., 2000. The C++ Programming Language, third ed. Addison-Wesley.
Sward, R.E., Chamillard, A.T., 2004. Re-engineering global variables in Ada. ACM
SIGADA Ada Letters 24 (4), 29–34.
Tennberg, P., 2002. Refactoring global objects in multithreaded applications. C/C++
Users Journal 20 (5), 20–24.
Tonella, P., 2003. Using a concept lattice of decomposition slices for program
understanding and impact analysis. IEEE Transactions on Software Engineering
29 (6), 495–509.
Weiser, M., 1979. Program slices: formal, psychological, and practical investigations
of an automatic program abstraction method. Ph.D. Thesis, University of
Michigan, Ann Arbor, MI.
Wheeler, D.A., 2005. SLOC count user’s guide. http://www.dwheeler.com/
sloccount/sloccount.html.
Wulf, W., Shaw, M., 1973. Global variables considered harmful. ACM SIGPLAN
Notices 8 (2), 28–34.
Yu, L., Schach, S.R., Chen, K., Offutt, A.J., 2004. Categorization of common coupling
and its application to the maintainability of the linux kernel. IEEE Transactions
on Software Engineering 30 (10), 694–706.
David Binkley is a Professor of Computer Science at Loyola College in Maryland
where he has worked since earning his doctorate from the University of Wisconsin
in 1991. From 1993 to 2000, Dr. Binkley was a visiting faculty researcher at the
National Institute of Standards and Technology (NIST), where his work included
participating in the Unravel program slicer project. While on leave from Loyola in
2000, he worked with Grammatech Inc. on the System Dependence Graph (SDG)
based slicer CodeSurfer and in 2008 he joined the researchers at the Crest Centre of
King’s College London to work, among other things, on dependence cluster analysis.
Dr. Binkley’s recent NSF-funded research continues to focus on improving semantics-based software engineering tools. This work has recently broadened from
exclusively considering programming-language semantics to now also consider
natural-language semantics through the use of Information Retrieval techniques
applied to source code and its supporting documents. His recent work also includes
a seven school collaborative project aimed at increasing the representation of
women and minorities in computer science.
Mark Harman is professor of Software Engineering in the Department of Computer
Science at King’s College London. He is widely known for work on source code
analysis and testing and he was instrumental in the founding of the field of Search
Based Software Engineering, a field that currently has active researchers in 24
countries and for which he has given 14 keynote invited talks. Professor Harman is
the author of over 140 refereed publications, on the editorial board of 7 interna-
tional journals and has served on 90 programme committees. He is director of the
CREST centre at King’s College London, which has a current research grant portfolio
in excess of £3m. More details are available from the CREST website:
crest.dcs.kcl.ac.uk.
Youssef Hassoun is a research associate at the CREST CENTRE, Department of
Computer Science of King’s College London. His research interests include search-
based approach to Software Engineering, program analysis and testing.
Syed Islam has recently started a PhD at King’s College London, supervised by Jens
Krinke and Mark Harman. Upon completing his undergraduate degree at London Metropolitan University in 2006, he started a Masters in Advanced Software Engineering at King’s College London. There, under the supervision of Mark Harman, he began working on dependence clusters, analyzing real world programs to show the qualitative and quantitative effects of global variables on the formation of dependence clusters. Later, following completion of his Masters, he moved to Bangladesh to join BRAC University as a lecturer, where he worked until Jan ’09. He
recently moved back to London, to do his PhD and is currently working on the SLIM
(SLIcing state based Model) project at CREST (Centre for Research on Evolution, Search and Testing).
Zheng Li obtained his PhD from the King’s College London CREST centre in 2009 under
the supervision of Mark Harman. He is currently a post doctoral researcher in the
CREST centre at King’s College. He was co-guest editor for two journal special issues
on software testing (one in Software Testing Verification and Reliability and one in
the Journal of Systems and Software) and is on the programme and organising
committee for the 9th IEEE International Working Conference on Source Code
Analysis and Manipulation and the organising committee of the 2nd IEEE Interna-
tional Conference on Software Testing. His interests are slicing, testing and SBSE, on
which topics he has published 8 papers, including those in Information and Software Technology, the Journal of Systems and Software, and IEEE Transactions on
Software Engineering. The paper he co-authored for Fundamental Approaches to
Software Engineering 2009 was awarded the prize for best theory paper at ETAPS.