Traditional interprocedural data-flow analysis is performed on whole programs; however, such whole-program analysis is not feasible for large or incomplete programs. We propose fragment data-flow analysis as an alternative approach which computes data-flow information for a specific program fragment. The analysis is parameterized by the additional information available about the rest of the program. We describe two frameworks for interprocedural flow-sensitive fragment analysis, the relationship between fragment analysis and whole-program analysis, and the requirements ensuring fragment analysis safety and feasibility. We propose an application of fragment analysis as a second analysis phase after an inexpensive flow-insensitive whole-program analysis, in order to obtain better information for important program fragments. We also describe the design of two fragment analyses derived from an existing whole-program flow- and context-sensitive pointer alias analysis, and present an empirical evaluation of their cost and precision. Our experiments show evidence of dramatic precision improvements obtainable at a practical cost.
Keywords: data-flow analysis, control dependency.
Program analysis is the method of computing properties of a program. It is useful for performing program optimization.
The article describes 7 types of metrics and more than 50 of their representatives, providing a detailed description and the calculation algorithms used. It also touches upon the role of metrics in software development.
DETECTION OF CONTROL FLOW ERRORS IN PARALLEL PROGRAMS AT COMPILE TIME (ijdpsjournal)
This paper describes a general technique to identify control flow errors in parallel programs, which can be automated into a compiler. The compiler builds a system of linear equations that describes the global control flow of the whole program. Solving these equations using standard techniques of linear algebra
can locate a wide range of control flow bugs at compile time. This paper also describes an implementation of this control flow analysis technique in a prototype compiler for a well-known parallel programming language. In contrast to previous research in automated parallel program analysis, our technique is efficient for large programs, and does not limit the range of language features.
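As a toy illustration of the approach (the paper's actual equation construction is richer), matching constraints on send and receive counts can be encoded as a linear system and tested for consistency over the rationals; an inconsistent system signals a control flow error. The tiny equation sets below are invented examples.

```python
# Toy compile-time control-flow check via linear equations.
# Variables count executions of program regions; matched sends and
# receives on a channel must balance. An inconsistent system flags a bug.
from fractions import Fraction

def consistent(eqs):
    """Gaussian elimination over the rationals on rows [a1..an | b];
    returns False iff some row reduces to 0 = nonzero (no solution)."""
    rows = [[Fraction(x) for x in r] for r in eqs]
    m, n = len(rows), len(rows[0]) - 1
    piv = 0
    for col in range(n):
        p = next((r for r in range(piv, m) if rows[r][col] != 0), None)
        if p is None:
            continue
        rows[piv], rows[p] = rows[p], rows[piv]
        for r in range(m):
            if r != piv and rows[r][col] != 0:
                f = rows[r][col] / rows[piv][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[piv])]
        piv += 1
    return not any(all(v == 0 for v in row[:-1]) and row[-1] != 0
                   for row in rows)

# Variables: s = sends on channel c, r = receives on channel c.
ok  = [[1, -1, 0], [1, 0, 2], [0, 1, 2]]  # s - r = 0, s = 2, r = 2
bad = [[1, -1, 0], [1, 0, 2], [0, 1, 3]]  # s - r = 0, s = 2, r = 3
```

Here `consistent(ok)` holds while `consistent(bad)` fails, flagging the mismatched send/receive counts at compile time.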
This presentation is supplementary material for the following article: Nikiforova, A., Bicevskis, J., & Karnitis, G. (2020, December). Towards a Concurrence Analysis in Business Processes. In 2020 Seventh International Conference on Social Networks Analysis, Management and Security (SNAMS) (pp. 1-6). IEEE.
This paper presents first steps towards a solution aimed at providing a concurrent business process analysis methodology for predicting the probability of incorrect business process execution. The aim of the paper is to (a) look at approaches to describing and dealing with the execution of concurrent processes, mainly focusing on the transaction mechanisms in database management systems, and (b) present an idea and a preliminary version of an algorithm that detects the possibility of incorrect execution of concurrent business processes. Analyzing business processes according to the proposed procedure allows transaction processing to be configured optimally.
Building a new CTL model checker using Web Services (infopapers)
Florin Stoica, Laura Stoica, Building a new CTL model checker using Web Services, Proceedings of the 21st International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2013), Split-Primosten, Croatia, 18-20 September 2013, pp. 285-289.
DOI=10.1109/SoftCOM.2013.6671858 http://dx.doi.org/10.1109/SoftCOM.2013.6671858
Cinnamons are a new computation model intended to form a theoretical foundation for Control Network Programming (CNP). CNP has established itself as a programming approach combining declarative and imperative features. It supports powerful tools for control of the computation process; in particular, these tools allow easy, intuitive, visual development of heuristic, nondeterministic, or randomized solutions. The paper provides rigorous definitions of the syntax and semantics of the new model of computation, at the same time trying to keep the intuition behind it clear. The purposely simplified theoretical model is then compared to both WHILE-programs (thus demonstrating its Turing completeness) and the “real” CNP. Finally, future research possibilities are mentioned that would eventually extend cinnamon programming and its theoretical foundation in the directions of nondeterminism, randomness, and fuzziness.
An optimal general type-2 fuzzy controller for Urban Traffic Network (ISA Interchange)
The urban traffic network model is illustrated by statecharts and object diagrams. However, these have limitations in showing the behavioral perspective of the traffic information flow. Consequently, a state-space model is used to calculate the half-value waiting time of vehicles. In this study, a combination of general type-2 fuzzy logic sets and the modified backtracking search algorithm (MBSA) is used to control the traffic signal scheduling and phase succession so as to guarantee a smooth flow of traffic with the least waiting times and average queue length. The parameters of the input and output membership functions are optimized simultaneously by the novel heuristic algorithm MBSA. A comparison is made between the achieved results and those of optimal and conventional type-1 fuzzy logic controllers.
Testing the performance of the power law process model considering the use of... (IJCSEA Journal)
Within the class of non-homogeneous Poisson process (NHPP) models, the Power Law Process (PLP) model has received considerable attention in the repairable-systems literature as a result of the simplicity of its mathematical computations and the attractive physical interpretation of its parameters. In this article, we investigate a new estimation approach, the regression estimation procedure, for the parametric PLP model. The regression approach for estimating the unknown parameters of the PLP model through the mean time between failures (TBF) function is evaluated against the maximum likelihood estimation (MLE) approach. The results from the regression and MLE approaches are compared based on three error evaluation criteria in terms of parameter estimation and its precision; the numerical application shows the effectiveness of the regression estimation approach at enhancing the predictive accuracy of the TBF measure.
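For reference, a sketch of the two estimation routes under one common parameterization of the PLP, with intensity u(t) = λβt^(β-1) and mean function m(t) = λt^β (failure-truncated case). The closed-form MLE and the log-log regression below follow standard textbook forms; the failure times are invented data, not the paper's.

```python
import math

def plp_mle(times):
    """Closed-form MLE for the failure-truncated PLP with n failures at
    times t1 < ... < tn: beta = n / sum(ln(tn/ti)), lam = n / tn**beta."""
    n, tn = len(times), times[-1]
    beta = n / sum(math.log(tn / t) for t in times[:-1])
    lam = n / tn ** beta
    return beta, lam

def plp_regression(times):
    """Regression route: since m(t) = lam * t**beta, regress ln(i) on
    ln(ti) by ordinary least squares to estimate beta and lam."""
    xs = [math.log(t) for t in times]
    ys = [math.log(i) for i in range(1, len(times) + 1)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    lam = math.exp(ybar - beta * xbar)
    return beta, lam

failure_times = [5.0, 13.0, 28.0, 52.0, 90.0]  # invented example data
beta_mle, lam_mle = plp_mle(failure_times)
beta_reg, lam_reg = plp_regression(failure_times)
```

By construction the MLE satisfies m(tn) = n exactly, which is a convenient sanity check on any implementation.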
A Metamodel and Graphical Syntax for NS-2 Programming (Editor IJCATR)
One of the most important issues that manufacturers worldwide pay special attention to is promoting their activities so as to deliver high-quality products or services. Simulation is perhaps the first recommendation for achieving this goal, and simulation software packages with different properties have accordingly been made available. One of the most widely used simulators is NS-2, which suffers from internal complexity. Domain-Specific Modeling Languages, on the other hand, can provide an abstraction level by which we can overcome the complexity of NS-2, increase production speed, and promote efficiency. In this paper, we therefore introduce a Domain-Specific Metamodel for NS-2. This new metamodel paves the way for introducing an abstract syntax and a modeling-language syntax. In addition, the syntax created in the Domain-Specific Modeling Language is supported by a graphical modeling tool.
Implementing an ATL Model Checker tool using Relational Algebra concepts (infopapers)
Florin Stoica, Laura Florentina Stoica, Implementing an ATL Model Checker tool using Relational Algebra concepts, Proceedings of the 22nd International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Split-Primosten, Croatia, 2014
An Approach for Project Scheduling Using PERT/CPM and Petri Nets (PNs) Tools (IJMER)
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Lecture 4: principles of parallel algorithm design, updated (Vajira Thambawita)
The main principles of parallel algorithm design are discussed here. For more information, visit https://sites.google.com/view/vajira-thambawita/leaning-materials
In this study, we discuss a fuzzy programming approach to bi-level linear programming problems and their application. Bi-level linear programming is characterized as mathematical programming for solving decentralized problems with two decision-makers in a hierarchical organization. Such problems become more important for the contemporary decentralized organization, where each unit seeks to optimize its own objective. We consider Bi-Level Linear Programming (BLPP) and apply the Fuzzy Mathematical Programming (FMP) approach to obtain a solution of the system. We suggest the FMP method for the minimization of the objectives in terms of linear membership functions. FMP is a supervised search procedure (supervised by the upper-level Decision Maker (DM)). The upper-level decision-maker provides the preferred values of the decision variables under his control (to enable the lower-level DM to search for his optimum in a wider feasible space) and the bounds of his objective function (to direct the lower-level DM to search for solutions in the right direction).
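A minimal sketch of the kind of linear membership function the FMP approach uses for a minimization objective, together with a max-min selection rule over candidate solutions; the bounds and candidate values are invented numbers, not from the paper.

```python
def linear_membership(z, z_best, z_worst):
    """Membership of objective value z for a minimization goal:
    1 at or below the best (aspired) level, 0 at or above the worst,
    and linearly interpolated in between."""
    if z <= z_best:
        return 1.0
    if z >= z_worst:
        return 0.0
    return (z_worst - z) / (z_worst - z_best)

def satisfaction(z1, z2):
    # max-min rule: overall satisfaction is the worse of the
    # upper-level DM's and lower-level DM's memberships.
    return min(linear_membership(z1, 10.0, 20.0),   # upper-level objective
               linear_membership(z2, 20.0, 40.0))   # lower-level objective

# Candidate solutions scored by (upper objective, lower objective).
candidates = [(10.0, 40.0), (14.0, 25.0), (20.0, 20.0)]
best = max(candidates, key=lambda c: satisfaction(*c))
```

The max-min rule picks the compromise point (14.0, 25.0) here: either extreme fully satisfies one decision-maker while leaving the other with zero membership.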
RESPONSE SURFACE METHODOLOGY FOR PERFORMANCE ANALYSIS AND MODELING OF MANET R... (IJCNCJournal)
Numerous studies have analyzed the performance of routing protocols in mobile ad hoc networks (MANETs); most of these studies vary at most one or two parameters per experiment and do not study the interactions among these parameters. Furthermore, efficient mathematical modeling of the performance has not been investigated; such models can be useful for performance analysis, optimization, and prediction. This study aims to show the effectiveness of the response surface methodology (RSM) in the performance analysis of routing protocols in MANETs and to establish a relationship between the influential parameters and these performance measures through mathematical modeling. Given that routing performance usually does not follow a linear pattern in the parameters, mathematical models of factorial designs are not suitable for establishing a valid and reliable relationship between performance and parameters. Therefore, a Box–Behnken design, an RSM technique that provides quadratic mathematical models, is used in this study to establish the relationship. The obtained models are statistically analyzed; they show that the studied performance measures accurately follow a quadratic evolution. These models provide invaluable information and can be useful in analyzing, optimizing, and predicting the performance of mobile ad hoc routing protocols.
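For concreteness, a small sketch of how a Box–Behnken design is generated (edge midpoints of the factor cube plus center runs, in coded ±1 units); the fitted model is then the quadratic y = b0 + Σ bi·xi + Σ bii·xi² + Σ bij·xi·xj. This is the standard construction, not code from the study.

```python
from itertools import combinations, product

def box_behnken(k, center_runs=1):
    """Box-Behnken design in coded units: for each pair of factors,
    the four (+/-1, +/-1) combinations with all other factors held
    at 0, plus the requested number of center runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            pt = [0] * k
            pt[i], pt[j] = a, b
            runs.append(tuple(pt))
    runs += [(0,) * k] * center_runs
    return runs

design = box_behnken(3)  # 12 edge-midpoint runs + 1 center run
```

Unlike a full 3-level factorial (27 runs for 3 factors), the 13-run design above still supports estimating all linear, interaction, and pure-quadratic coefficients, which is why it suits this kind of study.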
Scalability in Model Checking through Relational Databases (CSCJournals)
In this paper we present a new ATL model checking tool used for the verification of open systems. An open system interacts with its environment, and its behavior depends on the state of the system as well as the behavior of the environment. Alternating-Time Temporal Logic (ATL) is interpreted over concurrent game structures, considered natural models for compositions of open systems. In contrast to previous approaches, our tool permits an interactive design of ATL models as state-transition graphs and is based on a client/server architecture: ATL Designer, the client tool, allows an interactive construction of the concurrent game structures as directed multigraphs, while the ATL Checker, the core of our tool, represents the server part and is published as a Web service. The ATL Checker includes an algebraic compiler implemented using ANTLR (Another Tool for Language Recognition). Our model checker allows designers to automatically verify that systems satisfy specifications expressed by ATL formulas. The original implementation of the model checking algorithm is based on Relational Algebra expressions translated into SQL queries. Several database systems were used for evaluating the system performance in the verification of large ATL models.
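The relational flavor of such an algorithm can be sketched on the CTL special case (ATL over a single-agent structure): the transition relation is a set of pairs (a table of edges), the pre-image is a join-like operation, and "can reach φ" is a least fixpoint. The states and transitions below are invented for illustration.

```python
def pre(R, S):
    """Join-like pre-image: states with at least one successor in S.
    In the SQL formulation this is a join of the edge table with S."""
    return {s for (s, t) in R if t in S}

def reach(R, goal):
    """Least fixpoint of X = goal ∪ pre(X): all states from which
    some path reaches the goal set (EF in CTL terms)."""
    cur = set(goal)
    while True:
        nxt = cur | pre(R, cur)
        if nxt == cur:
            return cur
        cur = nxt

# Transition relation stored as a set of (source, target) pairs.
R = {(1, 2), (2, 3), (3, 3), (4, 4)}
result = reach(R, {3})
```

Each fixpoint iteration is one join-plus-union over the edge table, which is what makes a relational-database backend a natural fit for large models.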
Review of Some Checkpointing Schemes for Distributed and Mobile Computing Env... (Eswar Publications)
Fault-tolerance techniques enable systems to carry out tasks in the presence of faults. A checkpoint is a local state of a process saved on stable storage. In a distributed system, since the processes in the system do not share memory, a global state of the system is defined as a combination of local states, one from each process. In case of a fault in a distributed system, checkpointing enables the execution of a program to be resumed from a previous consistent global state rather than from the commencement. In this way, the amount of useful processing lost because of the fault is appreciably reduced. In this paper, we discuss various issues related to checkpointing for distributed systems and mobile computing environments. We also describe various types of checkpointing: coordinated checkpointing, asynchronous checkpointing, communication-induced checkpointing, and message-logging-based checkpointing. We also present a survey of some checkpointing algorithms for distributed systems.
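A minimal sketch of the consistency condition underlying these schemes: a global state (one checkpointed step per process) is consistent iff no message is recorded as received inside the cut but sent after it (an "orphan" message). The process names and step numbers below are illustrative.

```python
def is_consistent(cut, messages):
    """cut: process -> last local step included in its checkpoint.
    messages: tuples (sender, send_step, receiver, recv_step).
    The cut is consistent iff no message is received inside the cut
    but sent after it (i.e., no orphan messages)."""
    return all(not (r_step <= cut[r] and s_step > cut[s])
               for (s, s_step, r, r_step) in messages)

# P1 sends a message at its step 3; P2 receives it at its step 5.
msgs = [("P1", 3, "P2", 5)]
```

A cut such as {"P1": 2, "P2": 5} records the receive but not the send and is therefore inconsistent, while {"P1": 3, "P2": 5} is consistent; a message sent but not yet received (in transit) does not violate consistency.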
A Survey of Various Fault Tolerance Checkpointing Algorithms in Distributed S... (Eswar Publications)
A distributed system is a collection of independent entities that cooperate to solve a problem that cannot be solved individually. Checkpointing is a fault-tolerance technique: a checkpoint is a saved state of a process during failure-free execution, enabling it to restart from this checkpointed state upon a failure, reducing the amount of lost work instead of repeating the computation from the beginning. The process of restoring a previous checkpointed state is known as rollback recovery. A checkpoint can be saved on either stable storage or volatile storage, depending on the failure scenarios to be tolerated. Checkpointing is a major challenge in mobile ad hoc networks. A mobile ad hoc network architecture consists of a set of self-configuring mobile hosts (MHs) capable of communicating with each other without the assistance of base stations, with some processes running on the mobile hosts. The main issues of this environment are insufficient power and limited storage capacity. This paper surveys the algorithms that have been reported in the literature for checkpointing in distributed systems as well as in mobile distributed systems.
The Concept of Authenticity in Philosophy of Sartre and Implications for Usin... (Eswar Publications)
The aim of this paper is to explain authenticity in Sartre's philosophy and its relation to the internet as an educational technology. Initially, using a descriptive and analytic method, authenticity is explained in Sartre's philosophy. Sartre believed that authenticity means being honest with yourself, having freedom, taking responsibility for that freedom, and respecting others' freedom. In analyzing these conceptions in relation to the internet, we can say the internet enhances the ability to choose and freedom. On the other hand, because of the lack of face-to-face communication and the anonymity of the internet, it can result in decreased responsibility and commitment. Existential anguish, which is considered a positive quality in Sartre's view, can motivate thought and action.
Forestalling Meticulous Jam Attacks Using Packet-Hiding Techniques (Eswar Publications)
The open nature of the wireless medium leaves it vulnerable to intentional interference attacks, typically referred to as jamming. This intentional interference with wireless transmissions can be used as a launch pad for mounting Denial-of-Service attacks on wireless networks. Typically, jamming has been addressed under an external threat model. However, adversaries with internal knowledge of protocol specifications and network secrets can launch low-effort jamming attacks that are difficult to detect and counter. In this work, we address the problem of selective jamming attacks in wireless networks. In these attacks, the adversary is active only for a short period of time, selectively targeting messages of high importance. In our setup there are two attacker nodes (nodes that create jamming), and we introduce one new sender node; the jammer nodes are within the twenty-five nodes. Using the new sender node, we eliminate the attacker nodes completely. We describe how jamming occurs in the network and the approach to eliminating the attacker nodes using the new sender node. We propose a mistrial approach to avoid flooding packets in a jammed network, and we compare the performance of the mistrial and damping approaches for avoiding jamming packets. We illustrate the impact of selective jamming in terms of network performance degradation and adversary effort, and how the sender can overcome the jamming node in the network with the assistance of the new sender node.
Contains information about career development and explains the steps in the career development process. The employees' and employers' roles in the career development process are discussed.
Scheduling Using Multi Objective Genetic Algorithm (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Performance Analysis of Parallel Algorithms on Multi-core System using OpenMP (IJCSEIT Journal)
Current multi-core architectures have become popular due to their performance and efficient simultaneous processing of multiple tasks. Today's parallel algorithms focus on multi-core systems. The design of parallel algorithms and the measurement of their performance are the major issues in a multi-core environment. If one wishes to execute a single application faster, the application must be divided into subtasks or threads to deliver the desired result. Numerical problems, especially the solution of linear systems of equations, have many applications in science and engineering. This paper describes and analyzes parallel algorithms for computing the solution of a dense system of linear equations and for approximately computing the value of π using the OpenMP interface. The performance (speedup) of the parallel algorithms on a multi-core system is presented. The experimental results on a multi-core processor show that the proposed parallel algorithms achieve good performance (speedup) compared to the sequential algorithms.
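A sequential Python sketch of the π computation described above, structured the way an OpenMP parallel-for with a sum reduction would decompose it: each "thread" handles an interleaved subset of midpoint-rule iterations and the partial sums are reduced at the end. (Illustrative only; the paper's implementation uses OpenMP itself.)

```python
import math

def partial_pi(chunk, nchunks, steps):
    """Partial midpoint-rule sum of integral(0..1) 4/(1+x^2) dx for one
    'thread': iterations are dealt out round-robin across chunks,
    much like an OpenMP schedule(static, 1) loop."""
    width = 1.0 / steps
    total = 0.0
    for i in range(chunk, steps, nchunks):
        x = (i + 0.5) * width
        total += 4.0 / (1.0 + x * x)
    return total * width

nchunks, steps = 4, 100_000
# The sum over chunks plays the role of the OpenMP reduction(+:pi).
pi_est = sum(partial_pi(c, nchunks, steps) for c in range(nchunks))
```

Because each chunk touches a disjoint set of iterations and only the final reduction combines them, the decomposition is race-free, which is exactly the property the OpenMP reduction clause guarantees.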
In the present paper, the applicability and capability of AI techniques for effort estimation prediction have been investigated. Neuro-fuzzy models are very robust, characterized by fast computation, and capable of handling distorted data; given the non-linearity present in the data, they are an efficient quantitative tool for predicting effort estimation. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment.
The initial parameters of OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership functions are determined optimally using a hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed with the OHLANFIS technique performs better than the standard ANFIS model.
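For illustration, a Gaussian membership function of the kind tuned by the hybrid learning algorithm can be written as follows (a generic sketch; the paper's exact parameterization is not reproduced here):

```python
import math

def gaussian_mf(x, c, sigma):
    """Degree of membership of x in a fuzzy set with centre c and width sigma."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))
```

The hybrid learning step adjusts c and sigma for each rule so that the fuzzy inference output best fits the training effort data.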
Program Partitioning and Scheduling in Advanced Computer Architecture (Pankaj Kumar Jain)
Keywords: advanced computer architecture, program partitioning and scheduling, latency, levels of parallelism, loop-level parallelism, subprogram-level parallelism, job- or program-level parallelism, communication latency, grain packing and scheduling, program graphs and packing
Problems in Task Scheduling in Multiprocessor System (ijtsrd)
Contemporary computer systems are multiprocessor or multicomputer machines, and their efficiency depends on good methods of administering the executed work. Fast processing of a parallel application is possible only when its parts are appropriately ordered in time and space, which calls for efficient scheduling policies in parallel computer systems. In this work, deterministic scheduling problems are considered. Classical scheduling theory assumed that an application is executed by only one processor at any moment in time; this assumption has recently been weakened, especially in the context of parallel and distributed computer systems. This monograph is devoted to problems of deterministic scheduling of applications (tasks, in scheduling terminology) that require more than one processor simultaneously; we call such applications multiprocessor tasks. The complexity of open multiprocessor task scheduling problems is established, and algorithms for scheduling multiprocessor tasks on parallel and dedicated processors are proposed. For the special case of applications with a regular structure that can be divided into parts of arbitrary size processed independently in parallel, a method of finding the optimal scattering of work in a distributed computer system is proposed; applications with such regular characteristics are called divisible tasks. The divisible-task concept enables tractable computation models for a wide class of computer architectures such as chains, stars, meshes, hypercubes, and multistage networks, and it also supports the evaluation of computer system performance, examples of which are presented. This work summarizes earlier works of the author and contains new original results.
Mukul Varshney | Jyotsna | Abhakiran Rajpoot | Shivani Garg"Problems in Task Scheduling in Multiprocessor System" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-1 | Issue-4 , June 2017, URL: http://www.ijtsrd.com/papers/ijtsrd2198.pdf http://www.ijtsrd.com/computer-science/computer-architecture/2198/problems-in-task-scheduling-in-multiprocessor-system/mukul-varshney
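The core idea behind divisible tasks can be sketched in a few lines: when communication cost is ignored, the optimal scattering gives each processor a share proportional to its speed, so that all parts finish at the same time. This is a simplified Python illustration of that principle, not the monograph's full model.

```python
def split_divisible_task(total_work, speeds):
    """Share of a divisible task for each processor, proportional to its speed,
    so that all processors finish simultaneously (communication cost ignored)."""
    total_speed = sum(speeds)
    return [total_work * s / total_speed for s in speeds]
```

For processor speeds (1, 2, 2) and 100 units of work, the shares are 20, 40 and 40, and every processor finishes after 20 time units.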
Assessing Software Reliability Using SPC – An Order Statistics Approach (IJCSEA Journal)
There are many software reliability models based on the times of occurrence of errors during the debugging of software. It is shown that asymptotic likelihood inference is possible for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates are obtained from these models for the conditional failure rate of the software given the data from the debugging process; the data can be grouped or ungrouped. For someone deciding when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations: characterization problems, detection of outliers, linear estimation, study of system reliability, life-testing, survival analysis, data compression, and many other fields, as the literature shows. Statistical Process Control (SPC) can monitor the forecasting of software failures and thereby contribute significantly to the improvement of software reliability, and control charts are widely used for process control in the software industry. In this paper we propose a control mechanism based on order statistics of the cumulative quantity between observations of time-domain failure data, using the mean value function of the Half-Logistic Distribution (HLD) based on an NHPP.
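Under one common parameterization (an assumption on our part; the paper's exact form may differ), the NHPP mean value function based on the half-logistic distribution is m(t) = a·F(t), where F(t) = (1 − e^(−t/θ)) / (1 + e^(−t/θ)) is the half-logistic CDF and a is the expected total number of failures:

```python
import math

def hld_mean_value(t, a, theta):
    """Expected cumulative failures by time t: m(t) = a * F(t),
    with F the half-logistic CDF and a the expected total failure count."""
    e = math.exp(-t / theta)
    return a * (1.0 - e) / (1.0 + e)
```

m(t) starts at 0, increases monotonically, and saturates at a, which is the shape an SPC control chart on cumulative failures would be built around.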
Pareto Type II Based Software Reliability Growth Model (Waqas Tariq)
The past four decades have seen the formulation of several software reliability growth models to predict the reliability and error content of software systems. This paper presents the Pareto type II model as a software reliability growth model, together with expressions for various reliability performance measures. Probability theory, distribution functions, and probability distributions play a major role in software reliability model building. The paper presents estimation procedures to assess the reliability of a software system using the Pareto distribution, based on a Non-Homogeneous Poisson Process (NHPP).
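As a sketch of the model's shape (assuming the standard Pareto type II, i.e. Lomax, CDF; the paper's notation may differ), the NHPP mean value function is m(t) = a·F(t) with F(t) = 1 − (β/(β+t))^α:

```python
def pareto2_mean_value(t, a, alpha, beta):
    """Expected cumulative errors detected by time t under an NHPP whose
    mean value function uses the Pareto type II (Lomax) CDF
    F(t) = 1 - (beta / (beta + t)) ** alpha; a = expected total error content."""
    return a * (1.0 - (beta / (beta + t)) ** alpha)
```

The curve rises from zero toward the total error content a, which is what makes it usable as a reliability growth model.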
Content-Based Image Retrieval (CBIR) systems have been used to search for relevant images in various research areas. CBIR systems use features such as shape, texture, and color, and the extraction of these features is the main step on which the retrieval results depend. Color features in CBIR include the color histogram, color moments, and the conventional color correlogram; color space selection determines how the color information of the query image's pixels is represented. Shape is the basic characteristic of the segmented regions of an image. Different methods using different shape representation techniques have been introduced for better retrieval: global shape representations were used earlier, and the field has since moved towards local shape representations, which relate more to the expression of the result than to the method. Local shape features may be derived from texture properties and color derivatives. Texture features have been used for document images, segmentation-based recognition, and satellite images, and appear in different CBIR systems along with color, shape, geometric structure, and SIFT features.
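As a minimal illustration of the color-histogram feature mentioned above (our own sketch, not any particular CBIR system's code), each RGB channel can be quantised into a fixed number of bins:

```python
def color_histogram(pixels, bins=8):
    """Per-channel colour histogram of an RGB image given as (r, g, b) tuples,
    each channel value in 0..255, quantised into `bins` buckets."""
    hist = [[0] * bins for _ in range(3)]
    for px in pixels:
        for ch in range(3):
            hist[ch][min(px[ch] * bins // 256, bins - 1)] += 1
    return hist
```

Two images can then be compared by a distance between their histograms, which is the basis of color-feature retrieval.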
Cyber attacks have become increasingly prevalent in the past few years, during which attackers have discovered new vulnerabilities to carry out malicious activities on the internet; both clients and servers have been victimized. Clickjacking is one of the attacks adopted to deceive innocuous internet users into initiating some action. It exploits a vulnerability existing in web applications, using a technique that enables cross-domain attacks with the help of user-initiated clicks to perform unintended actions. This paper traces the vulnerabilities that make a website susceptible to clickjacking and proposes a solution for them.
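The standard server-side defence (shown here as an illustration; it is not necessarily the specific solution the paper proposes) is to forbid other origins from framing the page via response headers:

```python
def anti_clickjacking_headers():
    """HTTP response headers that prevent the page from being embedded in a
    frame on another origin, the prerequisite for a clickjacking attack."""
    return {
        "X-Frame-Options": "DENY",  # legacy header, still widely honoured
        "Content-Security-Policy": "frame-ancestors 'none'",  # modern equivalent
    }
```

Any web framework can attach these headers to every response; a page that cannot be framed cannot be overlaid with deceptive click targets.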
Performance Analysis of Audio and Video Synchronization using Spreaded Code D... (Eswar Publications)
Audio and video synchronization plays an important role in speech recognition and multimedia communication. Audio-video sync is a significant problem in live video conferencing, caused by the variable delays introduced by various hardware components and software environments. The objective of synchronization is to preserve the temporal alignment between the audio and video signals. This paper proposes audio-video synchronization using a spreading-codes delay-measurement technique. The proposed method, evaluated on a home database, achieves 99% synchronization efficiency. The audio-visual signature technique provides a significant reduction in audio-video sync problems and supports effective performance analysis of audio and video synchronization. The paper also implements an audio-video synchronizer and analyses its performance in terms of synchronization efficiency, audio-video time drift, and audio-video delay. The simulation is carried out using MATLAB and Simulink; the system automatically estimates and corrects the timing relationship between the audio and video signals, maintaining quality of service.
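The delay-measurement step can be illustrated with a brute-force cross-correlation search (a generic sketch in Python; the paper's spreading-code method is more elaborate): the lag that maximises the correlation between the two signals is the estimated audio-video offset.

```python
def estimate_lag(ref, sig, max_lag):
    """Return the shift of `sig` relative to `ref` (in samples) that maximises
    their cross-correlation, the core of aligning one stream to another."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(ref[i] * sig[i + lag]
                    for i in range(len(ref))
                    if 0 <= i + lag < len(sig))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Once the lag is known, the synchronizer delays the earlier stream by that many samples to restore alignment.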
Because complicated devices are available in industry, models are being developed for consumers at a lower resource cost, and home automation systems have been developed by several researchers. The limitations of existing home automation include architectural complexity, high equipment costs, and interface inflexibility. In this paper we propose a system built on the PIC16F72 microcontroller that is secure, cost-efficient, and flexible, leading to an efficient home automation system. The system can control various home appliances such as fans, bulbs, and tube lights. The paper describes the components used and how the connected components work. The home automation system uses an Android app entitled "Home App", which provides flexibility and an easy-to-use GUI.
Semantically Enhanced Personalised Adaptive E-Learning for General and Dysle... (Eswar Publications)
E-learning plays an important role in providing required, well-formed knowledge to a learner, and the medium has advanced in various directions such as adaptive e-learning systems. Enhancing e-learning semantically can improve the retrieval and adaptability of the learning curriculum. This paper provides a semantically enhanced, module-based e-learning platform for a computer science programme from a learner-centric perspective: learners are categorized by proficiency to provide a personalized learning environment. Learning disorders on e-learning platforms still require considerable research, so the paper also provides a personalized assessment model for alphabet learning with learning objects for children who face dyslexia.
Agriculture plays an important role in the economy of our country: over 58 percent of rural households depend on the agriculture sector as their means of livelihood, and agriculture is one of the major contributors to Gross Domestic Product (GDP). Seeds are the soul of agriculture. This application reduces the time researchers and farmers need to determine seedling parameters: it tells farmers the percentage of seedlings that will grow, which is essential for estimating the yield of a particular crop. Manual calculation may introduce errors, which the developed app minimizes. Scientists and farmers can use the app to obtain physiological seed quality parameters and to take decisions about their farming activities. In this article, a desktop app for calculating seed germination percentage and vigour index is developed in the PHP scripting language.
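The two quantities the app computes follow standard formulas; a Python sketch is shown below (the app itself is written in PHP, and whether it uses exactly this vigour-index variant is our assumption):

```python
def germination_percentage(germinated, total):
    """Share of sown seeds that germinated, as a percentage."""
    return 100.0 * germinated / total

def vigour_index(germ_pct, seedling_length_cm):
    """Classic vigour index: germination percentage times mean seedling length."""
    return germ_pct * seedling_length_cm
```

For example, 80 germinated seeds out of 100 with a mean seedling length of 12.5 cm gives a germination percentage of 80 and a vigour index of 1000.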
What happens when adaptive video streaming players compete in time-varying ba... (Eswar Publications)
Competition among adaptive video streaming players severely diminishes user-QoE. When players compete at a bottleneck link many do not obtain adequate resources. This imbalance eventually causes ill effects such as screen flickering and video stalling. There have been many attempts in recent years to overcome some of these problems. However, added to the competition at the bottleneck link there is also the possibility of varying network bandwidth which can make the situation even worse. This work focuses on such a situation. It evaluates current heuristic adaptive video players at a bottleneck link with time-varying bandwidth conditions. Experimental setup includes the TAPAS player and emulated network conditions. The results show PANDA outperforms FESTIVE, ELASTIC and the Conventional players.
WLI-FCM and Artificial Neural Network Based Cloud Intrusion Detection System (Eswar Publications)
Security and performance are the major issues that have to be addressed in cloud computing, and intrusion is one of the basic and imperative security problems. Consequently, it is essential to create an Intrusion Detection System (IDS) that detects both inside and outside attacks with high precision in a cloud environment. In this paper, a cloud intrusion detection system at the hypervisor layer is developed and assessed to detect malicious activities in the cloud computing environment. It uses a hybrid algorithm, a fusion of the WLI-FCM clustering algorithm and a back-propagation artificial neural network, to improve detection accuracy. The proposed system is implemented and compared with k-means and classic FCM, using DARPA's KDD Cup 1999 dataset for simulation. The detailed performance analysis shows that the proposed system detects anomalies with high detection accuracy and a low false alarm rate.
Spreading Trade Union Activities through Cyberspace: A Case Study (Eswar Publications)
This report presents the outcome of an investigative study of the modus operandi of the Academic Staff Union of Polytechnics (ASUP), YabaTech. The investigation covered the logistics and cost implications of spreading union activities among members. It was discovered that the cost of managing and disseminating information to members was high, and that logistics problems contributed to loss of information in transit, cutting some members off from union activities. To curtail the problems identified, we proposed the design of a secure, dynamic website for spreading union activities among members and the public. The proposed system was implemented using HTML5 together with interface frameworks such as Bootstrap and jQuery, which enable the responsive features of the application interface; the backend was built with PHP and MySQL. Evaluation of the new system showed that the cost of managing information was reduced considerably and the logistics problems identified in the old system were eliminated.
Identifying an Appropriate Model for Information Systems Integration in the O... (Eswar Publications)
Organizations today use information systems to optimize processes and to increase coordination and interoperability across the organization. Since oil and gas is one of the largest industries in the world, its information systems (IS), which fall into three categories of field IS, plant IS, and enterprise IS, need to be compatible in order to achieve interoperability and, as a result, optimized processes. In this paper we introduce different models of information systems integration, identify the types of information systems used in the upstream and downstream sectors of the petroleum industry, and finally, based on experts' opinions, identify a suitable model for information systems integration in this industry.
Link- and Node-Disjoint Evaluation of the Ad Hoc on Demand Multi-path Distance... (Eswar Publications)
This work illustrates the AOMDV routing protocol; its ancestor, the AODV routing protocol, is also described. The tutorial demonstrates how forward and reverse paths are created by AOMDV, describes the formulation of loop-free paths, and covers node-disjoint and link-disjoint paths. Finally, the performance of AOMDV along link- and node-disjoint paths is investigated: in terms of energy consumption, a WSN running AOMDV with link-disjoint paths outperforms one running AOMDV with node-disjoint paths.
Bridging Centrality: Identifying Bridging Nodes in Transportation Network (Eswar Publications)
Several centralities are used to identify the importance of a node in a network, but the majority of these measures are dominated by node degree because they consider only network topology. We propose a centrality measure, bridging centrality, based on both information flow and topological aspects. Applying bridging centrality to real-world networks, including a transportation network, we show that the nodes it distinguishes are well located at the connecting positions between highly connected regions. Bridging centrality can discriminate bridging nodes, that is, nodes with more information flowing through them that are located between highly connected regions, which other centrality measures cannot.
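One ingredient of bridging centrality, in the commonly cited formulation where it is multiplied by betweenness (treat this as an assumption about the paper's exact definition), is the bridging coefficient, computable directly from an adjacency list:

```python
def bridging_coefficient(adj, v):
    """Inverse degree of v divided by the summed inverse degrees of its
    neighbours; large for nodes that connect, rather than sit inside,
    densely connected regions."""
    inv_deg = lambda u: 1.0 / len(adj[u])
    return inv_deg(v) / sum(inv_deg(u) for u in adj[v])
```

On the path a–b–c, the endpoint a scores 2.0 while the middle node b scores 0.25; combined with betweenness, this balance is what singles out true bridging nodes.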
Nowadays we are living in an era of information technology in which nearly every person engages with IT, intentionally or unintentionally. Technology has played a vital role in day-to-day life for the last few decades, and we all depend on it to obtain maximum benefit and comfort. The latest advances of this era are enlightening the world in the form of the Internet of Things (IoT). IoT is a domain in which each object can perform tasks while communicating with other objects: a world full of devices, sensors, and other objects that communicate and make human life better and easier than ever. This paper provides an overview of current research work on IoT in terms of architecture, the technologies used, and applications; after a literature review, it also highlights the issues related to the technologies used for IoT. The main purpose of this survey is to present the latest technologies, their trends, and their details in the field of IoT in a systematic manner, as a basis for further research.
Automatic Monitoring of Soil Moisture and Controlling of Irrigation System (Eswar Publications)
In the past couple of decades there has been rapid growth in agricultural technology. Drip irrigation, applied properly, is very reasonable and proficient, but many of the drip irrigation methods proposed so far have been found to be expensive and cumbersome to use. In a conventional drip irrigation system the farmer has to keep watch over the irrigation schedule, which differs for different types of crops. Remotely monitored embedded irrigation systems have therefore become essential for farmers to save energy, time, and money, irrigating only when water is actually required. In this approach, soil data on chemical constituents, water content, salinity, and fertilizer requirements are collected wirelessly and processed to produce a better drip irrigation plan. This paper reviews different monitoring systems and proposes an automatic monitoring system model using a Wireless Sensor Network (WSN) that helps the farmer improve yield.
Multi-Level Data Security Model for Big Data on Public Cloud: A New Model (Eswar Publications)
With the advent of cloud computing, big data has emerged as a crucial technology. Certain types of cloud provide consumers with free services such as storage and computational power. This paper makes use of infrastructure as a service, where storage from public cloud providers is leveraged by an individual or organization, and emphasizes a model that anyone can use without cost. Users can store confidential data without security concerns, as the data is altered in such a way that it cannot be understood by an intruder, while the user can still quickly retrieve the original data. The proposed security model effectively and efficiently provides robust security both while data resides on the cloud infrastructure and while it is being migrated to or from the cloud.
Impact of Technology on E-Banking: Cameroon Perspectives (Eswar Publications)
The financial services industry is experiencing rapid change in service delivery and channel usage; financial companies and users of financial services are watching new technologies as they emerge and deciding whether to embrace them and the opportunities they offer to save and manage enormous amounts of time, cost, and stress.
There is no doubt about the favourable and manifold impact of technology on e-banking, as pictured in this review paper: almost all banks offer at least the most accessible e-banking technology, such as ATMs and cards. On the other hand, cheap and readily available technology has opened favourable competition in the e-banking services business, with a wide range of competitors challenging commercial banks in Cameroon in providing digital financial services.
Classification Algorithms with Attribute Selection: an evaluation study using... (Eswar Publications)
Attribute (feature) selection plays an important role in the process of data mining. A data set generally contains many attributes, but not all of them are relevant to effective classification.
Attribute selection is a technique used to rank attributes by relevance. This paper presents a comparative evaluation of classification algorithms before and after attribute selection using the Waikato Environment for Knowledge Analysis (WEKA). The study concludes that the performance metrics of the classification algorithms improve after attribute selection, which also reduces the work of processing irrelevant attributes.
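The filter idea can be sketched with an information-gain ranking, the criterion behind WEKA's InfoGainAttributeEval (the helper functions below are our own):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Reduction in label entropy achieved by splitting `rows` on column `attr`:
    attributes with higher gain are ranked as more relevant."""
    base, n = entropy(labels), len(rows)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    return base - sum(len(ys) / n * entropy(ys) for ys in by_value.values())
```

An attribute that perfectly predicts the class gains the full label entropy, while an irrelevant attribute gains nothing, which is exactly the ranking a filter-based selector exploits.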
Mining Frequent Patterns and Associations from the Smart meters using Bayesia... (Eswar Publications)
In today's world, migration of people from rural to urban areas is common, and health care services are among the most challenging needs of people in poor health. Advances in technology have led to smart homes containing various sensors and smart meter devices that automate other electronic devices. These smart meters can also capture the daily activities of patients and monitor their health conditions through the frequent patterns and association rules mined from the meter data. In this work we propose a model that monitors the activities of patients at home and sends their daily activities to the corresponding doctor. We extract frequent patterns and association rules from the log data, predict the health condition of the patient, and give suggestions according to the prediction. Our work is divided into three stages. First, we record the daily activities of the patient over a specific time period at three regular intervals. Second, we apply frequent-pattern growth to extract association rules from the log file. Finally, we apply k-means clustering to the input and use a Bayesian network model to predict the health behaviour of the patient, giving precautions accordingly.
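The pattern-extraction step can be illustrated with plain support counting (the paper uses FP-growth; this Apriori-style sketch with our own names only shows what "frequent" means):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count the support of every itemset of size 1 or 2 across the
    transactions and keep those that reach min_support, the first passes
    of an Apriori-style frequent-pattern search."""
    freq = {}
    for size in (1, 2):
        counts = {}
        for t in transactions:
            for items in combinations(sorted(t), size):
                counts[items] = counts.get(items, 0) + 1
        freq.update({k: v for k, v in counts.items() if v >= min_support})
    return freq
```

Association rules are then read off the frequent itemsets; for instance, if {tea, toast} is frequent and {tea} is frequent, the rule tea → toast has confidence support({tea, toast}) / support({tea}).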
Network as a Service Model in Cloud Authentication by HMAC Algorithm (Eswar Publications)
Cloud computing, the internet-based, pay-per-use pooling of resources, now rules the IT field. Today every organization trusts the web: information must flow, but should not be held en route, so all customers rely on the cloud to move data under securing protocols. Third-party observation, and in certain circumstances direct interception, threaten packets both in flight and at rest in the virtual private cloud. Global security statistics for 2017 put the rate of sensitive information being hacked in the cloud at roughly 75.35%, and security analysts warn that this figure could approach 100%. For this reason, the proposed research work concentrates on an authentication message-digest key for authenticated routing of Network-as-a-Service packets, applying OSPF (Open Shortest Path First) in a cloud implemented with GNS3 and testing its resistance to attackers.
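The message-digest-key idea reduces to a keyed hash over each routing update. Below is a minimal Python sketch using HMAC-SHA256 (OSPF's own keyed authentication differs in wire format; this only shows the principle):

```python
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> str:
    """Authentication tag for a routing update: HMAC over the message body."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches; rejects tampered packets
    and packets signed with the wrong shared key."""
    return hmac.compare_digest(sign(key, message), tag)
```

A router that shares the key can validate every received update; an attacker who alters even one byte, or who lacks the key, produces a tag that fails verification.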
Microstrip patch antennas are widely used in wireless detection applications due to their low power consumption, low cost, versatility, field excitation, and ease of fabrication. Microstrip patch antennas, also called printed antennas, suffer from narrow bandwidth; to overcome this drawback, a flame-retardant material (FR-4) is used as the substrate. A rectangular microstrip patch antenna (RMPA) with an FR-4 substrate is well suited to explosive detection applications. The proposed printed antenna was designed with dimensions of 60 x 60 mm². The FR-4 material has a dielectric constant of 4.3 and a thickness of 1.56 mm. One side of the substrate carries the copper ground plane of 60 x 60 mm²; the other side carries the copper patch of 34 x 29 mm² with a thickness of 0.03 mm. RMPA structures without a slot, with a vertical slot, with a double horizontal slot, and with a centre slot were designed, and the performance of the antennas was analysed in terms of gain, directivity, E-field, VSWR, and return loss. The performance analysis shows that the double-horizontal-slot RMPA gives the best result, providing the maximum gain (8.61 dB) and the minimum return loss (-33.918 dB). The E-field excitation value is used to detect the SEMTEX explosive material, and the design was simulated using CST software.
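Return loss and VSWR are interchangeable figures of merit; converting the reported minimum return loss of 33.918 dB gives a VSWR close to the ideal value of 1:

```python
def vswr_from_return_loss(rl_db):
    """VSWR from a (positive) return-loss magnitude in dB:
    |Gamma| = 10 ** (-RL / 20), VSWR = (1 + |Gamma|) / (1 - |Gamma|)."""
    gamma = 10.0 ** (-rl_db / 20.0)
    return (1.0 + gamma) / (1.0 - gamma)
```

For RL = 33.918 dB, the reflection coefficient magnitude is about 0.02 and the VSWR is about 1.04, consistent with the excellent impedance match the paper reports for the double-horizontal-slot design.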
Accelerate your Kubernetes clusters with Varnish Caching (Thijs Feryn)
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
The New Frontiers of AI in RPA with UiPath Autopilot™ (UiPathCommunity)
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates artificial intelligence into the development and use of automations.
📕 Together we will look at some examples of how Autopilot is used across tools in the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Information-Flow Analysis of Design Breaks up
Int. J. Advanced Networking and Applications
Volume: 6, Issue: 3, Pages: 2319-2328 (2014), ISSN: 0975-0290
Information-Flow Analysis of Design Fragments
Rajendra Kumar
Research Scholar, Department of Computer Science, GITM Lucknow, Uttar Pradesh Technical University, Lucknow.
Email: rajendra888999@gmail.com
Anil Kumar
Research Scholar, Department of Computer Science, GITM Lucknow, Uttar Pradesh Technical University, Lucknow.
Email: anilgangwar15@gmail.com
Namrata Dhanda
Asst. Professor, Department of Computer Science, GITM Lucknow, Uttar Pradesh Technical University, Lucknow.
Email: ndhanda510@gmail.com
---------------------------------------------------------------------ABSTRACT-----------------------------------------------------------------
Traditional interprocedural data-flow analysis is performed on whole programs; however, such whole-program analysis is not feasible for large or incomplete programs. We propose fragment data-flow analysis as an alternative approach, which computes data-flow information for a specific program fragment. The analysis is parameterized by the additional information available about the rest of the program. We describe two frameworks for interprocedural flow-sensitive fragment analysis, the relationship between fragment analysis and whole-program analysis, and the requirements ensuring fragment analysis safety and feasibility. We propose an application of fragment analysis as a second analysis phase after an inexpensive flow-insensitive whole-program analysis, in order to obtain better information for important program fragments. We also describe the design of two fragment analyses derived from an existing whole-program flow- and context-sensitive pointer alias analysis for C programs, and present an empirical evaluation of their cost and precision. Our experiments show evidence of dramatic precision improvements obtainable at a practical cost.
-------------------------------------------------------------------------------------------------------------------------------------------------------
Date of Submission: February 11, 2014 Date of Acceptance: March 12, 2014
-------------------------------------------------------------------------------------------------------------------------------------------------------
1 Introduction
Many stages of the software development cycle require information about the properties of large and complex programs. Data-flow analysis extracts semantic information which can be used for code optimization, program slicing, semantic change analysis, program restructuring, and code testing. In many cases, interprocedural data-flow analysis is needed to obtain information about program properties that depend on the interaction between different procedures. Flow-insensitive analysis ignores the control flow order of statements and computes one solution for the whole program; in contrast, flow-sensitive analysis follows the control flow order of statements and computes different solutions at distinct program points. Context-sensitive analysis considers (sometimes approximately) only paths along which calls and returns are properly matched, while context-insensitive analysis does not make this distinction.
Traditionally, interprocedural data-flow analysis is designed to examine whole programs; however, in many cases such whole-program analysis is impractical. For very large programs with hundreds of thousands or even millions of lines of code, the time required to build a whole-program representation and the space required to store it are prohibitive [1]. In many cases, the programs are incomplete: the source code for components of the program (e.g., libraries) is not available. Empirical evidence suggests that precise interprocedural flow-sensitive analysis does not scale for large programs [18]. In some cases the analysis results are not needed for the whole program, but only for a relatively small part of it; for example, a maintenance task for a specific program fragment may only require data-flow information for program points inside the fragment, both before and after the maintenance change. (This research was supported, in part, by NSF grants CCR-9501761 and CCR-9804065, and Edison Design Group.)
This paper proposes an alternative approach to program analysis. Instead of addressing the problem of computing data-flow information for the whole program, we address the problem of computing data-flow information for a specific program fragment. The problem is parameterized by the additional information available about the rest of the program. Such fragment data-flow analysis can avoid the problems of conventional whole-program analysis. For example, information can be obtained about fragments of very large programs for which whole-program analysis is prohibitively expensive. Likewise, analysis can be performed on fragments of incomplete programs; for such programs, traditional whole-program analysis is not possible. Finally, fragment analysis computes information about only the "interesting portion" of the program, which can be significantly smaller than the program itself. This paper is a first step in investigating the theory and practice of fragment data-flow analysis; it considers only flow-sensitive fragment analysis. The main contributions of this work can be summarized as follows:
• We describe two frameworks for interprocedural flow-sensitive fragment analysis, derived from existing frameworks for whole-program analysis. We discuss the relationship between fragment analysis and whole-program analysis, and the requirements ensuring fragment analysis safety and feasibility.
• We propose an application of fragment analysis as a second analysis phase after an inexpensive flow-insensitive whole-program analysis, in order to obtain better information for important program fragments. This approach can be used for programs that are too big to be analyzed by flow-sensitive whole-program analysis, yet allow flow-insensitive whole-program analysis; for example, C programs with around 100,000 lines of code [1, 18, 17].
• We describe the design of two fragment analyses derived from a whole-program flow- and context-sensitive pointer alias analysis [10] for C programs.
• We present an empirical evaluation of the cost and precision of these two fragment pointer alias analyses. We show that the time and space costs of the analyses are practical. In about 75% of our experiments, the better of the two analyses results in a fourfold or higher precision improvement over the whole-program flow-insensitive solution.
The rest of the paper is organized as follows: Section 2 describes frameworks for flow-sensitive whole-program analysis. Section 3 discusses frameworks for fragment analysis and the issues involved in their design. Section 4 describes the whole-program pointer alias analysis from [10], and Section 5 describes the design of two fragment pointer alias analyses. Empirical results are presented in Section 6. Section 7 describes related work, and Section 8 presents our conclusions.
2 Whole-Program Data-Flow Analysis
This section presents two well-known frameworks for whole-program interprocedural flow-sensitive data-flow analysis. Without loss of generality, we consider only forward data-flow problems [12]. Given a whole program to be analyzed, a whole-program analysis constructs a data-flow framework ⟨G, L, F, M, η⟩, where:
• G = (N, E, ρ) is a directed graph with node set N, edge set E, and starting node ρ ∈ N (for our purposes, G is an interprocedural control flow graph).
• ⟨L, ≤, ∧⟩ is a meet semi-lattice [12] with partial order ≤ and meet ∧. For simplicity, we consider only L which is finite¹ and has a top element ⊤.
• F ⊆ {f | f : L → L} is a function space closed under composition and arbitrary meets. We assume that F is monotone [12].
• M : N → F is an assignment of transfer functions to the nodes in G (without loss of generality, we assume no edge transfer functions). The transfer function for node n is denoted by f_n.
• η ∈ L is the solution at the bottom of ρ.
The program is represented by an interprocedural control flow graph (ICFG) [10], which comprises control flow graphs for all procedures in the program. Each procedure has a single entry node (node ρ is the entry node of the starting procedure) and a single exit node. Each call statement is represented by a pair of nodes, a call node and a return node. There is an edge from the call node to the entry node of the called procedure; there is also an edge from the exit node of the called procedure to the return node in the calling procedure.
A path from node n1 to node nk is a sequence of nodes p = (n1, …, nk) such that (ni, ni+1) ∈ E. Let f_p = f_nk ∘ … ∘ f_n1 (so that f_p applies f_n1 first). A realizable path is a path on which every procedure returns to the call site which invoked it [16, 10, 13]; only such paths represent possible sequences of execution steps. A same-level realizable path is a realizable path whose first and last nodes belong to the same procedure, and on which the number of call nodes is equal to the number of return nodes. Such paths represent sequences of execution steps during which the call stack may temporarily grow deeper, but never shallower than its original depth, before eventually returning to its original depth [13]. The set of all realizable paths from n to m is denoted by RP(n, m); the set of all same-level realizable paths from n to m is denoted by SLRP(n, m).
Definition 1. For each n ∈ N, the meet-over-all-realizable-paths (MORP) solution at n is defined as MORP(n) = ∧_{p ∈ RP(ρ,n)} f_p(η).
1 The results can be easily generalized for finite-height
semi-lattices.
Context-Insensitive Analysis. After constructing ⟨G, L, F, M, η⟩, a whole-program analysis computes a solution S : N → L; the data-flow solution at the bottom of node n is denoted by S_n. The solution is safe iff S_n ≤ MORP(n) for each node n; a safe analysis computes a safe solution for each valid input program. Traditionally, a system of equations is constructed and then solved using fixed-point iteration. In the simplest case, a context-insensitive analysis constructs a system of equations of the form

S_ρ = η,   S_n = ∧_{m ∈ Pred(n)} f_n(S_m)

where Pred(n) is the set of predecessor nodes of n. The initial solution has S_ρ = η and S_n = ⊤ for any n ≠ ρ; the final solution is a fixed point of the system and is also a safe approximation of the MORP solution.
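The fixed-point iteration just described can be sketched in a short, self-contained example. This is an illustrative toy, not code from the paper: the four-node graph, the gen/kill transfer functions, and the powerset lattice (meet = union for a may-analysis, ⊤ = the empty set) are all invented for demonstration.

```python
from functools import reduce

# Toy ICFG: node -> list of predecessor nodes ("rho" is the start node).
PREDS = {"rho": [], "n1": ["rho"], "n2": ["n1"], "n3": ["n1", "n2"]}

# Lattice L = powerset of {"a","b","c"}; for a may-analysis the meet is
# set union and the top element is the empty set.
TOP = frozenset()
ETA = frozenset({"a"})          # solution at the bottom of rho

# Monotone gen/kill transfer functions f_n(x) = (x - KILL[n]) | GEN[n].
GEN = {"rho": set(), "n1": {"b"}, "n2": {"c"}, "n3": set()}
KILL = {"rho": set(), "n1": {"a"}, "n2": set(), "n3": set()}

def transfer(n, x):
    return frozenset((set(x) - KILL[n]) | GEN[n])

def fixed_point():
    # Initial solution: S_rho = eta, S_n = TOP for n != rho.
    sol = {n: TOP for n in PREDS}
    sol["rho"] = ETA
    changed = True
    while changed:              # iterate until the solution stabilizes
        changed = False
        for n, preds in PREDS.items():
            if n == "rho":
                continue
            # S_n = meet (union) over predecessors m of f_n(S_m)
            new = reduce(frozenset.union,
                         (transfer(n, sol[m]) for m in preds), frozenset())
            if new != sol[n]:
                sol[n], changed = new, True
    return sol
```

Because the transfer functions are monotone and the lattice is finite, the chaotic iteration is guaranteed to terminate at a fixed point of the equation system.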
Context-Sensitive Analysis. The problem with the above approach is that information is propagated from the exit of a procedure to all of its callers. Context-sensitive analysis is used to address this potential source of imprecision. One approach is to propagate elements of L together with tags which approximate the calling context of the procedure. At the exit of the procedure, these tags are consulted in order to back-propagate information only to call sites at which the corresponding calling context existed. The "functional approach" to context sensitivity [16] uses a new lattice that is the function space of functions mapping L to L. Intuitively, if the solution at node n is a map h_n : L → L, then for each x ∈ L we can use h_n(x) to approximate f_p(x) for each path p ∈ SLRP(e, n), where e is the entry node of the procedure containing n. In other words, h_n(x) approximates the part of the solution at n that occurs under calling context x at e. If calling context x never occurs at e, then h_n(x) = ⊤. The solution of the original problem is obtained as ∧_{x ∈ L} h_n(x).
In general, this approach requires compact representation of functions and explicit functional compositions and meets, which are usually infeasible. When L is finite, a feasible version of the analysis can be designed [16]. Figure 1 presents a simplified description of this version. H[n,x] contains the current value of h_n(x); the worklist contains pairs (n,x) for which H[n,x] has changed and has to be propagated to the successors of n. If n is a call node, at line 7 the value of H[n,x] is propagated to the entry node of the called procedure. If n is an exit node, the value of H[n,x] is propagated only to return nodes at whose corresponding call nodes x occurs (lines 8-10).
For distributive frameworks [12], this algorithm terminates with the MORP solution; for non-distributive monotone frameworks, it produces a safe approximation of the MORP solution [16]. When the lattice is the power set of some basic finite set D of data-flow facts (e.g., the set of all potential aliases or the set of all variable definitions), the algorithm can be modified to propagate elements of D instead of elements of 2^D. For distributive frameworks, this approach produces a precise solution [13]. For non-distributive monotone frameworks, restricting the context to a singleton set necessarily introduces some approximation; the whole-program pointer alias analysis from [10] falls in this category.
input ⟨G, L, F, M, η⟩; L is finite
output S: array[N] of L
declare H: array[N, L] of L; initial values ⊤
        W: list of (n, x), n ∈ N, x ∈ L; initially empty
[1] H[ρ, η] := η; W := {(ρ, η)};
[2] while W ≠ ∅ do
[3]   remove (n, x) from W; y := H[n, x];
[4]   if n is not a call node or an exit node then
[5]     foreach m ∈ Succ(n) do propagate(m, x, f_m(y));
[6]   if n is a call node then
[7]     e := called_entry(n); propagate(e, y, f_e(y));
[8]   if n is an exit node then
[9]     foreach r ∈ Succ(n) and l ∈ L do
[10]      if H[call_node(r), l] = x then propagate(r, l, f_r(y));
[11] foreach n ∈ N do
[12]   S[n] := ∧_{l ∈ L} H[n, l];
[13] procedure propagate(n, x, y)
[14]   H[n, x] := H[n, x] ∧ y; if H[n, x] changed then add (n, x) to W;
Fig. 1. Worklist implementation of context-sensitive whole-program analysis
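As a concrete illustration of the worklist algorithm of Fig. 1, the following sketch runs it on a tiny hand-built ICFG with one caller and one callee. Everything here is invented for the example and is not the paper's implementation: the node labels, the two-fact bitmask lattice (meet = bitwise OR, ⊤ = 0), and the transfer functions.

```python
from functools import reduce
from operator import or_

# Tiny ICFG: m_entry -> call1 -> [p_entry -> p_body -> p_exit] -> ret1 -> m_exit
SUCC = {"m_entry": ["call1"], "call1": [], "p_entry": ["p_body"],
        "p_body": ["p_exit"], "p_exit": ["ret1"], "ret1": ["m_exit"],
        "m_exit": []}
CALL_NODES = {"call1"}
EXIT_NODES = {"p_exit"}
CALLED_ENTRY = {"call1": "p_entry"}       # call node -> entry of callee
CALL_NODE_OF = {"ret1": "call1"}          # return node -> matching call node

# Finite lattice: bitmasks over two facts {a=1, b=2}; meet = bitwise OR
# (may-analysis), top element = 0 (empty fact set).
LATTICE = range(4)
TOP = 0

def f(n, x):
    # Toy transfer functions: p_body generates fact b, all others are identity.
    return x | 2 if n == "p_body" else x

def analyze(rho="m_entry", eta=1):
    H = {(n, l): TOP for n in SUCC for l in LATTICE}   # H[n,x] ~ h_n(x)
    W = [(rho, eta)]
    H[(rho, eta)] = eta

    def propagate(n, x, y):               # lines 13-14 of Fig. 1
        new = H[(n, x)] | y               # meet with the incoming value
        if new != H[(n, x)]:
            H[(n, x)] = new
            W.append((n, x))

    while W:                              # lines 2-10 of Fig. 1
        n, x = W.pop()
        y = H[(n, x)]
        if n not in CALL_NODES and n not in EXIT_NODES:
            for m in SUCC[n]:
                propagate(m, x, f(m, y))
        if n in CALL_NODES:               # enter the callee under context y
            e = CALLED_ENTRY[n]
            propagate(e, y, f(e, y))
        if n in EXIT_NODES:               # return only to matching contexts
            for r in SUCC[n]:
                for l in LATTICE:
                    if H[(CALL_NODE_OF[r], l)] == x:
                        propagate(r, l, f(r, y))
    # Lines 11-12: the final solution is the meet over all contexts.
    return {n: reduce(or_, (H[(n, l)] for l in LATTICE)) for n in SUCC}
```

The context check at the exit node is what makes the propagation context-sensitive: the callee's result flows back only to return nodes whose call nodes actually exhibited the calling context x.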
3 Fragment Data-Flow Analysis
This section describes how interprocedural flow-sensitive whole-program analysis can be modified to obtain fragment data-flow analysis, which examines a program fragment instead of a whole program. The structure of context-insensitive and context-sensitive fragment analysis is discussed, as well as the issues involved in the design of fragment analysis and the requirements ensuring its safety.
3.1 Fragment Analysis Structure
We assume that the analysis input contains a program fragment F, which is a set of procedures. We expect these procedures to be strongly interrelated; otherwise, the analysis may yield information that is too imprecise. The input also contains whole-program information I, which represents the knowledge available about the programs to which F belongs. The whole-program information depends on the particular software development environment and the process in which fragment analysis is used; the role of I is further discussed in Sect. 3.2. We use P_I(F) to denote the set of all valid whole programs that contain F and for which I is true; depending on I, P_I(F) can be anything from a singleton set (e.g., when the source code of the whole program is available) to an infinite set. Given F and I, a fragment analysis extracts several kinds of information, shown in Table 1. Graph G′ = (N′, E′) is the ICFG for the fragment and can be constructed similarly to the whole-program case, except for calls to procedures outside of F, which are not represented by any edges in G′. Set BoundaryEntries contains every entry node e ∈ N′ which has a predecessor c ∉ N′ in some program from P_I(F). Similarly, BoundaryCalls contains every call node c ∈ N′ which has a successor e ∉ N′ in some program from P_I(F). Set BoundaryReturns contains the return nodes corresponding to call nodes from BoundaryCalls.
Table 1. Information extracted by a fragment analysis

Information                  | Description
⟨G′, L′, F′, M′, η′⟩         | Data-flow framework
BoundaryEntries              | Entry nodes of procedures called from outside of F
BoundaryCalls                | Call nodes of calls to procedures outside of F
BoundaryReturns              | Return nodes of calls to procedures outside of F
β_e : BoundaryEntries → L′   | Summary information at boundary entry nodes
β_c : BoundaryCalls → F′     | Summary information at boundary call nodes
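To make the boundary sets of Table 1 concrete, here is a hypothetical helper showing how they might be derived when the whole-program information I contains a call graph. This sketch is not part of the paper: the edge-list encoding of the call graph and all procedure names are invented for illustration.

```python
def boundary_sets(fragment_procs, call_edges):
    """call_edges: iterable of (caller, callee) procedure-name pairs
    taken from the whole-program call graph in I."""
    frag = set(fragment_procs)
    # BoundaryEntries: entries of fragment procedures called from outside F.
    boundary_entries = {callee for caller, callee in call_edges
                        if callee in frag and caller not in frag}
    # BoundaryCalls: call sites inside F that invoke procedures outside F.
    boundary_calls = {(caller, callee) for caller, callee in call_edges
                      if caller in frag and callee not in frag}
    # BoundaryReturns: one return node per boundary call.
    boundary_returns = set(boundary_calls)
    return boundary_entries, boundary_calls, boundary_returns

# Example: fragment F = {f, g}; main calls f, and g calls a library routine.
entries, calls, rets = boundary_sets(
    {"f", "g"},
    [("main", "f"), ("f", "g"), ("g", "libfoo"), ("f", "h")])
```

Here f is a boundary entry (called from main, which is outside F), while the calls to libfoo and h are boundary calls with matching boundary returns.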
3.2 Fragment Analysis Design
Designers of a specific fragment analysis have to address several important problems. One problem is to decide what kind of whole-program information I to use. The decision depends on the software development environment, as well as the process in which the fragment analysis is used. In this paper, we are particularly interested in a process where an inexpensive flow-insensitive whole-program

S′_n = ∧_{m ∈ Pred(n)} f′_n(S′_m)              if n ∉ BoundaryEntries ∪ BoundaryReturns
S′_n = β_e(n) ∧ ∧_{m ∈ Pred(n)} f′_n(S′_m)     if n ∈ BoundaryEntries
S′_n = f′_n(β_c(m)(S′_m))                      if n ∈ BoundaryReturns and m = call_node(n)
S′_ρ = η′                                      if ρ ∈ N′

Fig. 2. Context-insensitive fragment analysis
input ⟨G′, L′, F′, M′, η′⟩; L′ is finite
output S: array[N′] of L′
declare H: array[N′, L′] of L′; initial values ⊤′
        W: list of (n, x), n ∈ N′, x ∈ L′; initially empty
[1a*] if ρ ∈ N′ then
[1b]    H[ρ, η′] := η′; add (ρ, η′) to W;
[1c*] foreach n ∈ BoundaryEntries do
[1d*]   H[n, β_e(n)] := β_e(n); add (n, β_e(n)) to W;
[2] while W ≠ ∅ do
[3]   remove (n, x) from W; y := H[n, x];
[4]   if n is not a call node or an exit node then
[5]     foreach m ∈ Succ(n) do propagate(m, x, f′_m(y));
[6a*] if n is a call node and n ∉ BoundaryCalls then
[7a]    e := called_entry(n); propagate(e, y, f′_e(y));
[6b*] if n is a call node and n ∈ BoundaryCalls then
[7b*]   r := ret_node(n); propagate(r, x, f′_r(β_c(n)(y)));
[8]   if n is an exit node then
[9]     foreach r ∈ Succ(n) and l′ ∈ L′ do
[10]      if H[call_node(r), l′] = x then propagate(r, l′, f′_r(y));
[11] foreach n ∈ N′ do
[12]   S[n] := ∧_{l′ ∈ L′} H[n, l′];
Fig. 3. Worklist implementation of context-sensitive fragment analysis
analysis is performed first, and then a more precise flow-sensitive fragment analysis is used on fragments for which better information is needed. This approach can be used for programs that are too big to be analyzed by flow-sensitive whole-program analysis, yet allow flow-insensitive whole-program analysis; for example, C programs with around 100,000 lines of code [1, 18, 17]. In this scenario, the first stage computes a whole-program flow-insensitive solution and the program call graph. The second stage uses the call graph and the flow-insensitive solution as its whole-program information I. The two fragment analyses in Sect. 5 are designed in this manner.
Another problem is to construct the information described in Table 1. Sets BoundaryCalls and BoundaryReturns can be determined from F. Set BoundaryEntries by definition depends on I. The summary information at boundary nodes also depends on I. When the fragment analysis follows a flow-insensitive whole-program analysis, I contains the call graph of the program, from which the boundary entries can be easily determined. In this case, β_e and β_c can be extracted from the whole-program flow-insensitive solution, as shown in Sect. 5.
The semi-lattice L′ depends mostly on F. However, it may also depend on I; for example, when the fragment analysis follows a flow-insensitive whole-program analysis, I contains a whole-program solution which may need to be represented (possibly approximately) using L′. The fragment analysis complexity is bounded by a function of the size of L′; therefore, it is crucial to ensure that the size of L′ depends only on the size of F and not on the size of I. This requirement guarantees that the fragment analysis will be feasible for relatively small fragments of very large programs. The fragment analyses from Sect. 5 illustrate this situation.
3.3 Fragment Analysis Safety
The fragment analyses outlined above are similar in structure to the whole-program analyses from Sect. 2. In fact, we are only interested in fragment analyses that are derived from whole-program analyses. Consider a safe whole-program analysis which we want to modify in order to obtain a safe fragment analysis. The most important problem is to define the relationship between the semi-lattice L′ for a fragment F and the whole-program semi-lattices L_p for the programs p ∈ P_I(F). For each such L_p, the designers of the fragment analysis must define an abstraction relation α_p ⊆ L_p × L′. This relation encodes the notion of safety and is used to prove that the analysis is safe; it is never explicitly constructed or used during the analysis. If (x, x′) ∈ α_p, we write "α_p(x, x′)".
Intuitively, the abstraction relation α_p defines the relationship between the "knowledge" represented by values from L_p and the "knowledge" represented by values from L′. If α_p(x, x′), the knowledge associated with x′ "safely abstracts" the knowledge associated with x; thus, α_p is similar in nature to the abstraction relations used for abstract interpretation [19, 6]. The choice of α_p depends both on the original whole-program analysis and on the intended clients of the fragment analysis solution. Sect. 5 presents an example of one such choice.
Definition 2. A solution produced by a fragment analysis for an input pair (F, I) is safe if α_p(MORP_p(n), S′_n) for each p ∈ P_I(F) and each n ∈ N′, where S′_n is the fragment analysis solution at n and MORP_p(n) is the MORP solution at n in p. A safe fragment analysis yields a safe solution for each valid input pair (F, I).
With this definition in mind, we present a set of sufficient requirements that ensures the safety of the fragment analysis. Intuitively, the first requirement ensures that a safe approximation in L′, as defined by the partial order ≤, is also a safe abstraction according to α_p. The second requirement ensures that if x′ ∈ L′ safely abstracts the effects of each realizable path ending at a given node n, it can be used to safely abstract the MORP solution at n. Formally, for any p ∈ P_I(F) and its α_p, any x, y ∈ L_p and any x′, y′ ∈ L′, the following must be true:
Property 1: if α_p(x, x′) and y′ ≤ x′, then α_p(x, y′)
Property 2: if α_p(x, x′) and α_p(y, x′), then α_p(x ∧ y, x′)
The next requirement ensures that the transfer functions in the fragment analysis safely abstract the transfer functions in the whole-program analysis. Let f_{n,p} be the transfer function for n ∈ N′ in the whole-program analysis for any p ∈ P_I(F). For any n ∈ N′, x ∈ L_p and x′ ∈ L′, the following must be true:
Property 3: if α_p(x, x′), then α_p(f_{n,p}(x), f′_n(x′))
If F contains the starting procedure of the program, then ρ ∈ N′ and the fragment analysis solution at the bottom of ρ is η′. Let η_p be the whole-program solution at the bottom of ρ for any p ∈ P_I(F). Then the following must be true:
Property 4: α_p(η_p, η′)
The next requirement ensures that for each boundary entry node e, the summary value β_e(e) safely abstracts the effects of each realizable path that reaches e from outside of F. For any p ∈ P_I(F), let RPout_p(ρ, e) be the set of all realizable paths q = (ρ, …, c, e) in p such that c ∉ N′; recall that f_q is the composition of the transfer functions for the nodes in q. Then the following must be true:
Property 5: ∀q ∈ RPout_p(ρ, e) : α_p(f_q(η_p), β_e(e))
The last requirement ensures that the summary function
η_c(c) for each boundary call node c safely abstracts the
effects of each same-level realizable path from the entry to
the exit of the called procedure. Consider any p ∈ PI(F) in
which c has a successor entry node e ∉ N' and let t be the
exit node corresponding to e. Let SLRP_p(e, t) be the set of
all same-level realizable paths in p from e to t. Intuitively,
each realizable path ending at t has a suffix which is a same-level
realizable path; thus, η_c(c) should safely abstract each
path from SLRP_p(e, t). Formally, for any x ∈ L_p and x' ∈ L',
the following must be true:
Property 6: ∀q ∈ SLRP_p(e, t): if α_p(x, x'), then α_p(f_q(x), η_c(c)(x'))
If the above requirements are satisfied, the context-insensitive
fragment analysis derived from a safe context-insensitive
whole-program analysis is safe, according to
Definition 2. The proof considers a fixed-point solution of
the system in Fig. 2. It can be shown that for each p ∈ PI(F),
each n ∈ N' and each realizable path q = (ρ_p, ..., n), it
is true that α_p(f_q(ι_p), S'_n). Each such q is the
concatenation of two realizable paths r' and r''. Path r'
starts from ρ_p and lies entirely outside of F; if ρ_p ∈ N', r' is
empty. Path r'' = (e, ..., n) has e ∈ N' and n ∈ N' and is the
fragment suffix of q; it may enter and leave F arbitrarily.
The proof is by induction on the length of the fragment
suffix of q.
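The path decomposition used in this proof can be made concrete with a small sketch (ours, purely illustrative; node labels and the helper name are hypothetical): given a realizable path as a node sequence and the fragment node set N', the fragment suffix starts at the first node that belongs to N'.

```python
def split_at_fragment(path, n_prime):
    """Split a path (a node sequence) into the prefix r' that lies
    entirely outside the fragment and the fragment suffix r'',
    which starts at the first node belonging to N' (n_prime)."""
    for i, node in enumerate(path):
        if node in n_prime:
            return path[:i], path[i:]
    return path, []  # the path never enters the fragment

# rho_p lies outside F; e is a boundary entry node of the fragment
prefix, suffix = split_at_fragment(["rho_p", "a", "e", "b", "n"],
                                   {"e", "b", "n"})
```

The induction in the proof is over the length of the second component returned here; note that, as in the text, the suffix may itself leave and re-enter F.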
Similarly, the context-sensitive fragment analysis derived
from a safe context-sensitive whole-program analysis is
safe. It is enough to show that for each p ∈ PI(F), each n ∈ N'
and each realizable path q = (ρ_p, ..., n), there exists a
value l' ∈ L' such that α_p(f_q(ι_p), H[n, l']). Again, this can
be proven by induction on the length of the fragment suffix
of q.
4 Whole-Program Pointer Alias Analysis
This section presents a simplified high-level description of
the Landi-Ryder whole-program pointer alias analysis [10]
for the C programming language. The analysis considers a
set of names that can be described by the grammar in Fig. 4.
Each of the names corresponds to one or more run-time
memory locations. An alias pair (or simply an alias) is a
pair of names that potentially represent the same memory
location.
<Name> ::= <SimpleName> | <Deref> | <ArrayElem>
| <FieldAccess> | <K-Limited>
<SimpleName> ::= identifier /* variable name */
| heap_n /* heap location */
<Deref> ::= *(<Name>+?) /* dereference */
<ArrayElem> ::= <Name>[?] /* array element */
<Fld> ::= identifier /* field name */
<FieldAccess> ::= <Name>.<Fld> /* field of a structure */
<K-Limited> ::= <Name># /* k-limited name */
Fig. 4. Grammar for names
Int. J. Advanced Networking and Applications, Volume: 6, Issue: 3, Pages: 2319-2328 (2014), ISSN: 0975-0290
If a program point n is a call to malloc or a similar function,
the name heap_n represents the set of all heap memory locations
created at this point during execution. If there are recursive
data structures (e.g., linked lists), the number of names is
potentially infinite. The analysis limits the number of
dereferences in a name to a given constant k; any name with
more than k dereferences is represented by a k-limited name.
For example, for k = 1, the name (*((*p).f)).f is represented by
the k-limited name ((*p).f)#.
A simple name is a name generated by the SimpleName
nonterminal in the grammar. The root name of a name n is
the simple name used when n is generated by the grammar;
for example, the root name of (*p).f is p. Name n is a fixed
location if it does not contain any dereferences. Name (*p).f
is not a fixed location, while s[?].f is a fixed location.
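These definitions can be illustrated with a small Python sketch (our encoding of the Fig. 4 grammar as tuple trees; not the paper's data structures): k-limiting strips outer constructs until at most k dereferences remain and marks the result with '#'.

```python
# Names as tuple trees: ("var", v), ("deref", n), ("field", n, f),
# ("elem", n), ("klim", n) -- an illustrative encoding of Fig. 4.
def deref_count(name):
    kind = name[0]
    inner = 0 if kind == "var" else deref_count(name[1])
    return inner + (1 if kind == "deref" else 0)

def k_limit(name, k):
    """A name with more than k dereferences is replaced by a k-limited
    name: strip outer constructs until at most k remain, then mark
    the result as k-limited."""
    if deref_count(name) <= k:
        return name
    while deref_count(name) > k:
        name = name[1]          # drop the outermost construct
    return ("klim", name)

def root_name(name):
    return name[1] if name[0] == "var" else root_name(name[1])

def is_fixed(name):
    return deref_count(name) == 0   # fixed location: no dereferences

def show(name):
    kind = name[0]
    if kind == "var":   return name[1]
    if kind == "deref": return "(*" + show(name[1]) + ")"
    if kind == "field": return show(name[1]) + "." + name[2]
    if kind == "elem":  return show(name[1]) + "[?]"
    if kind == "klim":  return "(" + show(name[1]) + ")#"

# the paper's example: (*((*p).f)).f with k = 1 becomes ((*p).f)#
n = ("field", ("deref", ("field", ("deref", ("var", "p")), "f")), "f")
```

Here show(k_limit(n, 1)) yields "((*p).f)#", matching the example in the text, and s[?].f is recognized as a fixed location because it contains no dereferences.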
The lattice of the analysis is the power set of the set of all
pairs of names; the meet operator is set union and the partial
order is the "is-superset-of" relation. The analysis is flow-
and context-sensitive, and conceptually follows the Sharir-Pnueli
algorithm described in Sect. 2. However, since the
lattice is a power set, the algorithm can be modified to
propagate single aliases instead of sets of aliases: the
worklist contains triples (n, RA, PA), where n is a node, RA is
a reaching alias that is part of the solution at the entry of the
procedure to which n belongs, and PA is a possible alias at
the bottom of n. Restricting the reaching alias set to a single
alias introduces some approximation; therefore, the actual
Landi-Ryder algorithm can be viewed as an approximation
algorithm solving the more precise Sharir-Pnueli
formulation of the problem.
The fragment analyses in Sect. 5 are derived from a
modified version of the Landi-Ryder analysis. This version
considers only aliases containing a fixed location; aliases
with two non-fixed locations (e.g., an alias between (*p).f
and *q) are ignored during propagation. It can be shown that
this is safe, as long as the program does not contain
assignments of the form *p=&x; if such assignments
exist, they can be removed by introducing intermediate
temporary variables; for example, t=&x; *p=t. An
assignment whose left-hand side is a non-fixed location (i.e.,
a through-deref assignment) potentially modifies many fixed
locations. A safe solution for the modified analysis is
sufficient to estimate all fixed locations possibly modified
by through-deref assignments. We refer to this process as
resolution of through-deref assignments.
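The rewriting step described above can be sketched as a tiny source-to-source pass (ours, with statements modeled as (lhs, rhs) pairs of C expression strings and fresh temporaries t0, t1, ...; all names are hypothetical):

```python
def remove_addr_through_deref(stmts):
    """Rewrite assignments of the form *p = &x into t = &x; *p = t,
    so that the modified analysis, which drops aliases between two
    non-fixed locations, remains safe. Statements are (lhs, rhs)
    pairs of C expression strings."""
    out, fresh = [], 0
    for lhs, rhs in stmts:
        if lhs.startswith("*") and rhs.startswith("&"):
            tmp = "t%d" % fresh
            fresh += 1
            out.append((tmp, rhs))   # t = &x
            out.append((lhs, tmp))   # *p = t
        else:
            out.append((lhs, rhs))
    return out

rewritten = remove_addr_through_deref([("*p", "&x"), ("q", "p")])
```

After the pass, no assignment stores an address directly through a dereference, which is the precondition stated in the text.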
5 Fragment Pointer Alias Analyses
This section describes two fragment pointer alias analyses,
basic analysis A1 and extended analysis A2, derived from
the modified whole-program analysis in Sect. 4. They are
used for resolution of through-deref assignments. Their
design is based on a process in which flow-insensitive
whole-program pointer alias analysis is performed first, and
then more precise flow-sensitive fragment pointer alias
analysis is used on fragments for which better information is
needed. As described in Sect. 3.2, in this case the whole-program
information I contains a whole-program flow-insensitive
solution and the program call graph.
Analysis Input. The flow-insensitive solution SFI is obtained
using a whole-program flow- and context-insensitive
analysis similar to the one in [21]. Intuitively, the names in
the program are partitioned into equivalence classes; if two
names can be aliases, they belong to the same class in the
final solution. Every name starts in its own equivalence
class, and classes are then joined as possible aliases are
discovered. For example, statement p=&x causes the
equivalence classes of *p and x to merge. Overall, the
analysis is similar to other flow- and context-insensitive
analyses with almost-linear cost [17, 15]. SFI is then used
to resolve calls through function pointers and to produce a
safe approximation of the program call graph. The sets of
boundary entry, call, and return nodes can be easily
determined from this graph.
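The class-merging step can be sketched with a union-find structure (a deliberately minimal model of ours: only address-of and copy assignments are handled, and, unlike real almost-linear analyses [17, 15], pointed-to classes are not unified transitively):

```python
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def flow_insensitive_classes(stmts):
    """Tiny model of the flow- and context-insensitive phase:
    p = &x merges the classes of *p and x; p = q merges *p and *q.
    Statements are (lhs, rhs) pairs of C expression strings."""
    uf = UnionFind()
    for lhs, rhs in stmts:
        if rhs.startswith("&"):
            uf.union("*" + lhs, rhs[1:])
        else:
            uf.union("*" + lhs, "*" + rhs)
    return uf

uf = flow_insensitive_classes([("p", "&x"), ("q", "p"), ("r", "&y")])
# *p and x end up in one class; q = p merges *q into that class too
```

In the final solution, two names that can be aliases share a representative, which is exactly the property the fragment analyses rely on when reading SFI.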
The basic analysis A1 takes as input SFI and the program
call graph, as well as the source code for the analyzed
fragment F. The extended analysis A2 requires as additional
input the source code for all procedures directly or
indirectly called by procedures in F. These additional
procedures together with the procedures from F form the
extended fragment Fext. Including all transitively called
procedures allows A2 to better estimate the effects of calls
to procedures outside of F; the tradeoffs of this approach are
further discussed in Sect. 8.
Analysis Lattices. The whole-program lattice is based on the
set of names generated by the grammar in Fig. 4. Each such
name can be classified as either relevant or irrelevant. A
relevant name has a root name with a syntactic occurrence
in the fragment; all other names are irrelevant. Since the
fragment analysis solution is used to resolve through-deref
assignments, only aliases that contain a relevant non-fixed
location are useful. If the new lattice L' contains all such
aliases, its size still potentially depends on the number of
fixed locations in the whole program. This problem can be
solved by using special placeholder variables to represent
sets of related irrelevant fixed locations; this approach is
similar to the use of representative data-flow values in other
analyses [11, 10, 7, 20, 4].
Consider an equivalence class E_i ∈ SFI that contains at least
one relevant name. If E_i contains irrelevant fixed locations,
a placeholder variable ph_i is used to represent them. The
aliases from L' fall into two categories: (1) a pair of relevant
names, exactly one of which is a fixed location, and (2) a
relevant non-fixed location and a placeholder variable.
Clearly, the number of relevant names and the number of
placeholder variables depend only on the size of the
fragment; thus, the size of L' is independent of the size of
the whole program.
In the final solution, an alias with two relevant names
represents itself, while an alias with a placeholder ph_i
represents a set of aliases, one for each irrelevant fixed
location represented by ph_i. This can be formalized by
defining an abstraction relation α_p ⊆ L_p × L' for each p ∈ PI(F).
For each pair of sets S ∈ L_p and S' ∈ L', we have α_p(S, S')
iff each alias from S is safely abstracted by S'. Let x be
the non-fixed location in alias (x, y) ∈ S; then α_p is defined as
follows:
- If x is an irrelevant name, then (x, y) is safely abstracted by S'.
- If x and y are relevant and (x, y) ∈ S', then (x, y) is safely
abstracted by S'.
- If x is relevant and y is irrelevant, E_i is y's equivalence
class, and (x, ph_i) ∈ S', then (x, y) is safely abstracted by S'.
Intuitively, the above definition says that aliases involving
an irrelevant non-fixed location can be safely ignored. It
also says that irrelevant fixed locations from the same
equivalence class have equivalent behavior and need not be
considered individually. The safety implications of this
definition are discussed later. Note that if a fragment alias
solution is safe according to the above definition, it can be
used to safely resolve through-deref assignments.
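The three cases of this definition can be written down directly as a membership check (our sketch; the relevance predicate and placeholder map are hypothetical helpers standing in for fragment membership of the root name and for the ph_i of an irrelevant fixed location's class):

```python
def safely_abstracts(s_prime, alias, is_relevant, placeholder_of):
    """Check whether a single alias (x, y), where x is the non-fixed
    location of the pair, is safely abstracted by the set S'."""
    x, y = alias
    if not is_relevant(x):                    # case 1: irrelevant non-fixed
        return True                           #   location, safely ignored
    if is_relevant(y):                        # case 2: both names relevant
        return (x, y) in s_prime
    return (x, placeholder_of(y)) in s_prime  # case 3: y replaced by ph_i

# hypothetical setup: names rooted at p or s occur in the fragment,
# and every irrelevant fixed location falls into one class with ph1
rel = lambda name: ("p" in name) or ("s" in name)
ph = lambda name: "ph1"
```

For example, (( *p).f, g) with g irrelevant is abstracted by any S' containing ((*p).f, ph1), while an alias whose non-fixed side is irrelevant is abstracted unconditionally.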
Summaries at Boundary Nodes. Based on SFI, A1 and A2
construct a set of aliases θ ∈ L'. Each equivalence class E_i is
examined, and for each pair of names (x, y) of a non-fixed
location x and a fixed location y from the class, the
following is done: first, if x is an irrelevant name, the pair is
ignored. Otherwise, if y is a relevant name, (x, y) is added to
θ. Otherwise, if y is an irrelevant name, (x, ph_i) is added to θ.
Clearly, (x, y) is safely abstracted by θ according to the
definition of α_p given above.
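This construction of θ is mechanical enough to sketch directly (ours; the predicates and the placeholder naming are illustrative assumptions, with classes given as sets of name strings):

```python
def build_theta(classes, is_relevant, is_fixed, placeholder):
    """Construct the summary alias set theta from the flow-insensitive
    classes: for each pair of a non-fixed x and a fixed y in the same
    class, skip the pair if x is irrelevant, add (x, y) if y is
    relevant, and add (x, ph_i) otherwise."""
    theta = set()
    for i, names in enumerate(classes):
        for x in names:
            if is_fixed(x) or not is_relevant(x):
                continue
            for y in names:
                if not is_fixed(y):
                    continue
                theta.add((x, y) if is_relevant(y)
                          else (x, placeholder(i)))
    return theta

theta = build_theta(
    [{"*p", "x", "g"}],                      # one class: *p, x, g
    is_relevant=lambda n: n in {"*p", "x"},  # names rooted in the fragment
    is_fixed=lambda n: not n.startswith("*"),
    placeholder=lambda i: "ph%d" % i,
)
```

With g irrelevant, the pair (*p, g) is folded into (*p, ph0), so the size of θ is bounded by the fragment, not by the whole program.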
The basic analysis A1 defines η_e(e) = θ for each boundary
entry node e, and η_c(c)(x') = θ for each x' ∈ L' and each
boundary call node c. This essentially means that SFI is
used to approximate the solutions at boundary entry nodes
and boundary return nodes. For the extended analysis A2,
there are no boundary call nodes. For each boundary entry
node e that belongs to the original fragment F, the analysis
defines η_e(e) = θ. If e is part of the extended fragment Fext,
but is not part of the original fragment F, the analysis
defines η_e(e) = ∅.
Analysis Safety. To show the safety of A1, it is enough to
show that the requirements from Sect. 3.3 are satisfied.
Properties 1 and 2 are trivially satisfied. Since both ι_p and
ι' are the empty set, Property 4 is also true.
Showing that Property 3 is true requires careful examination
of the transfer functions from [10]. The formal proof is
based on two key observations. The first one is that aliases
involving an irrelevant non-fixed location are only
propagated through the nodes in the fragment, without
actually creating any new aliases; therefore, they can be
safely ignored. The second observation is that aliases with
the same non-fixed location and with different irrelevant
fixed locations have equivalent behavior. For example, alias
(*p, x) at the top of statement q=p; results in (*p, x) and
(*q, x) at the bottom of the statement. Similarly, (*p, y)
results in (*p, y) and (*q, y). If both x and y are represented
by a placeholder ph_i, alias (*p, ph_i) results in (*p, ph_i) and
(*q, ph_i), which satisfies Property 3.
The set θ described above is extracted from the whole-program
flow-insensitive alias solution, which is safe.
Therefore, θ safely abstracts any alias that could be true at a
node in the program; thus, Properties 5 and 6 are true. Since
all requirements are satisfied, A1 is safe. Similarly to the
whole-program case, the actual implementation propagates
single aliases instead of sets of aliases. As a result, the
reaching alias set is restricted to a single alias and therefore
some approximation is introduced. Of course, the resulting
solution is still safe.
For the extended analysis A2, it is not true that the solution
is safe at each node in the extended fragment. For example,
consider the entry node of a procedure that was not in the
original fragment F. If this procedure is called from outside
of the extended fragment Fext , aliases could be propagated
along this call edge during the whole-program analysis, but
would be missing in the fragment analysis. However, it can
be proven that for each node in the original fragment F, the
solution is safe. The proof is very similar to the one outlined
in Sect. 3.3, and still requires that Properties 1 through 4 are
true. For each realizable path q starting at ρ and ending at a
node in F, its fragment suffix is the subpath starting at the
first node in q that belongs to F. The proof is by induction
on the length of the fragment suffix; the base case of the
induction depends on the fact that η_e(e) = θ for boundary
entry nodes in F, and therefore the solution at such nodes is
safe.
6 Empirical Results
Our implementation uses the PROLANGS Analysis
Framework (version 1.0), which incorporates the Edison
Design Group front end for C/C++. The results were
gathered on a Sun Sparc-20 with 352 MB of memory. The
implementation analyzes a reduced version of C that
excludes signals, setjmp, and longjmp, but allows function
pointers, type casting and union types.
Table 2 describes the C programs used in our experiments.
It shows the program size in lines of code and number of
ICFG nodes, the number of procedures, the total number of
assignments, and the number of through-deref assignments.
Table 2. Analyzed Programs
For each of the data programs, we extracted by hand subsets
of related procedures that formed a cohesive fragment.
Significant effort was put into determining realistic
fragments: the program source code was thoroughly examined, the
program call graph was used to obtain a better understanding
of the calling relationships in the program, and the
documentation (both external and inside the source code)
was carefully analyzed. For each program, two fragments
were extracted.
were extracted. For example, for zip one of the fragments
consisted of all procedures implementing the implosion
algorithm. For espresso, one of the fragments consisted of
the procedures for reducing the cubes of a boolean function.
For sc, one of the fragments consisted of the procedures
used to evaluate expressions in a spreadsheet. Some
characteristics of the fragments are given in Table 3.
For each fragment, three experiments were performed; the
results are shown in Table 4. In the first experiment, the
solution from the whole-program flow-insensitive (FI)
analysis was used at each through-deref assignment to
determine the number of simple names possibly modified by
the assignment. For a fixed location that was not a simple
name, a modification of its root simple name was counted
(e.g., a modification of s.f was counted as a modification of
s). Then
Table 3. Analyzed Fragments
Table 4. Precision Comparison
Fragment     Whole-Prgm FI   Basic FFS   Percent   Extended FFS   Percent
zip.1        15.78           12.98       82.3      12.98          82.3
zip.2        3.88            1.02        26.3      1.02           26.3
sc.1         9.50            9.00        94.7      1.00           10.5
sc.2         13.14           3.29        25.0      3.29           25.0
larn.1       17.00           14.57       85.7      1.00           5.9
larn.2       9.00            9.00        100.0     1.00           11.1
espresso.1   107.10          107.10      100.0     34.00          31.7
espresso.2   57.55           57.55       100.0     35.11          61.0
tsl.1        41.87           17.07       40.8      1.00           2.4
tsl.2        41.88           24.82       59.3      1.00           2.4
moria.1      132.80          1.46        1.1       1.46           1.1
moria.2      31.04           3.55        11.4      3.55           11.4
the average across all through-deref assignments in the
fragment was taken. In the second experiment, the FI
analysis was followed by the basic fragment flow-sensitive
(FFS) analysis. Again, for each through-deref assignment
the number of simple names possibly modified was
determined; placeholder variables were expanded to
determine the actual simple names modified. Then the
average across all through-deref assignments was taken.
Each of these averages is shown as an absolute value and as
a percent of the FI average. The third experiment used the
extended fragment analysis instead of the basic fragment
analysis.
Overall, the results show that the precision of the extended
analysis is very good. In particular, for seven fragments it
produces averages very close to 1, which is the lower
bound. The averages are bigger than 4 for only three
fragments; all three take as input a pointer to an external
data structure, and a large number of the through-deref
assignments in the fragment are through this pointer. The
pointer itself is not modified, and each modification through
it resolves to the same number of simple names as in the FI
solution.
Table 3. Analyzed Fragments
Fragment     ICFG Nodes   % Total   Procs   Boundary Entries   Boundary Calls   Assignments (All)   Assignments (Deref)
zip.1        1351         21.9      28      5                  17               776                 59
zip.2        429          7.0       9       5                  19               255                 50
sc.1         1238         18.5      30      3                  30               609                 8
sc.2         793          11.9      13      3                  10               459                 23
larn.1       345          2.9       4       4                  35               188                 46
larn.2       420          3.5       11      9                  44               216                 4
espresso.1   440          2.9       6       2                  40               234                 38
espresso.2   963          6.3       19      5                  113              461                 53
tsl.1        355          2.3       13      4                  33               175                 15
tsl.2        1004         6.5       29      7                  134              459                 17
moria.1      2678         13.2      43      14                 348              1634                392
moria.2      1221         6.0       27      7                  149              644                 49
Table 2. Analyzed Programs
Program    LOC     ICFG Nodes   Procs   Assignments (All)   Assignments (Deref)
zip        8177    6172         122     3443                582
espresso   14910   15339        372     7822                1322
sc         8530    6678         160     3440                159
tsl        16053   15469        471     7249                507
larn       10014   12063        298     6021                389
moria      25292   20213        458     10557               1358
The performance of the basic analysis is less satisfactory.
For five fragments, it achieves the same precision as the
extended analysis. For the remaining fragments, in five
cases the solution is close to the FI solution. The main
reason is that precision gains from flow-sensitivity are lost
at calls to procedures outside of the fragment.
Table 5 shows the running times of the analyses in minutes
and seconds. The last two columns give the used space in
kilobytes. The results show that the cost of the fragment
analyses is acceptable, in terms of both space and time.
7 Related Work
In Harrold and Rothermel's separate pointer alias analysis
[8], a software module is analyzed separately and later
linked with other modules. The analysis is based
Table 5. Analysis Time and Space
Fragment     Whole-Prgm FI   Basic FFS   Extended FFS   Basic Space (KB)   Extended Space (KB)
zip.1 0:02 1:01 1:35 2544 3784
zip.2 0:02 0:18 0:18 2064 2064
sc.1 0:02 0:18 0:18 2504 2504
sc.2 0:02 0:25 0:33 2664 2824
larn.1 0:03 0:22 0:27 3760 6560
larn.2 0:03 0:20 0:28 3680 6556
espresso.1 0:07 0:52 1:58 5504 17400
espresso.2 0:07 0:56 3:07 5504 26904
tsl.1 0:08 0:33 0:43 5776 8672
tsl.2 0:08 1:15 1:25 12480 18648
moria.1 0:09 1:22 1:29 9504 10464
moria.2 0:09 1:39 1:37 7608 17440
on the whole-program analysis from [10]; it simulates the
aliasing effects that are possible under all calling contexts
for the module. Placeholder variables are used to represent
sets of variables that are not explicitly referenced in the
module, similarly to the placeholder variables in our
fragment pointer alias analyses. Aliases are assigned extra
tags describing the module calling context.
There are several differences between our work and [8].
First, we have designed a general framework for fragment
analysis and emphasized the importance of the theoretical
requirements that ensure analysis safety and feasibility.
Second, the intended application of our fragment pointer
alias analyses is to improve the information about a part of
the program after an inexpensive whole-program analysis;
the application in [8] is separate analysis of single-entry
modules. Finally, [8] does not present empirical evaluation
of the performance of the analysis; we believe that their
approach may have scalability problems.
Reference [11] presents an analysis that decomposes the
program into regions in which several local problems are
solved. Representative values are used for actual data-flow
information that is external to the region; our placeholder
variables are similar to these representative values. Other
similar mechanisms are the non-visible names from [10, 7],
extended parameters from [20], and unknown initial values
from [4]. We use an abstraction relation to capture the
correspondence between the representative and the actual
data-flow information; this is similar to the use of
abstraction relations in the field of abstract interpretation
[19, 6].
The work in [3] also addresses the analysis of program
fragments and uses the notion of representative data-flow
information for external data-flow values. However, in [3]
the specific fragments are libraries with no boundary calls,
the analysis computes def-use associations in object-
oriented languages with exceptions, and there is no
assumption of available whole-program information.
Cardelli [2] considers separate type checking and
compilation of program fragments. He proposes a
theoretical framework in which program fragments are
separately compiled in the context of some information
about missing fragments, and later can be safely linked
together.
Model checking is a technique for verifying properties of
finite-state systems; the desired properties are specified
using temporal-logic formulae. Modular model checking
verifies properties of system modules, under some
assumptions about the environment with which the module
interacts [9]. These assumptions play an analogous role to
that of the whole-program information in fragment
analysis. Further discussion of the relationship between
data-flow analysis and model checking is given in [14].
There is some similarity in the problem addressed by our
work and that in [5], which presents an analysis of modular
logic programs for which a compositional semantics is
defined. Each module can be analyzed using an abstract
interpretation of the semantics. The analysis results for
separate modules can be composed to yield results for the
whole program, or alternatively, the results for one module
can be used during the analysis of another module.
8 Conclusions
This paper is a first step in investigating the theory and
practice of fragment data-flow analysis. It proposes
fragment analysis as an alternative to traditional whole-
program analysis. The theoretical issues involved in the
design of safe and feasible flow-sensitive fragment analysis
are discussed. One possible application of fragment analysis
is to be used after a whole-program flow-insensitive
analysis in order to improve the precision for an interesting
portion of the program. This paper presents one such
example, in which better information about modifications
through pointer dereference is obtained by performing flow-
and context-sensitive fragment pointer alias analysis. The
design of two such analyses is described, and empirical
results evaluating their cost and precision are presented.
The empirical results show that the extended analysis
presented in Sect. 5 can achieve significant precision
benefits at a practical cost. The performance of the basic
analysis is less satisfactory, even though in about half of the
cases it achieves the precision of the extended analysis.
Clearly, in some cases the whole-program flow-insensitive
solution is not a precise enough estimate of the
effects of calls to external procedures. The extended
analysis solves this problem for the fragments used in our
experiments. We expect this approach to work well in cases
when the extended fragment is not much bigger than the
original fragment. One typical example would be a fragment
with calls only to relatively small external procedures which
provide simple services; in our experience, this is a common
situation. However, in some cases the extended fragment
may contain a prohibitively large part of the program;
furthermore, the source code for some procedures may not
be available. We are currently investigating different
solutions to these problems.
Acknowledgments. We would like to thank Thomas
Marlowe, Matthew Arnold, and the anonymous reviewers
for their valuable comments.
References
[1]. D. Atkinson and W. Griswold. Effective whole-program analysis in the presence of pointers. In Proc. Symp. on the Foundations of Software Engineering, pages 46-55, 1998.
[2]. L. Cardelli. Program fragments, linking and modularization. In Proc. Symp. on Principles of Prog. Lang., pages 266-277, 1997.
[3]. R. Chatterjee and B. G. Ryder. Data-flow-based testing of object-oriented libraries. Technical Report DCS-TR-382, Rutgers University, 1999.
[4]. R. Chatterjee, B. G. Ryder, and W. Landi. Relevant context inference. In Proc. Symp. on Principles of Prog. Lang., pages 133-146, 1999.
[5]. M. Codish, S. Debray, and R. Giacobazzi. Compositional analysis of modular logic programs. In Proc. Symp. on Principles of Prog. Lang., pages 451-464, 1993.
[6]. P. Cousot and R. Cousot. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixed points. In Proc. Symp. on Principles of Prog. Lang., pages 238-252, 1977.
[7]. M. Emami, R. Ghiya, and L. J. Hendren. Context-sensitive interprocedural points-to analysis in the presence of function pointers. In Proc. Conf. on Prog. Lang. Design and Implementation, pages 242-257, 1994.
[8]. M. J. Harrold and G. Rothermel. Separate computation of alias information for reuse. IEEE Transactions on Software Engineering, 22(7):442-460, July 1996.
[9]. O. Kupferman and M. Y. Vardi. Modular model checking. In Proc. Symp. on Compositionality, LNCS 1536, pages 381-401, 1997.
[10]. W. Landi and B. G. Ryder. A safe approximation algorithm for interprocedural pointer aliasing. In Proc. Conf. on Prog. Lang. Design and Implementation, pages 235-248, 1992.
[11]. T. Marlowe and B. G. Ryder. An efficient hybrid algorithm for incremental data flow analysis. In Proc. Symp. on Principles of Prog. Lang., pages 184-196, 1990.
[12]. T. Marlowe and B. G. Ryder. Properties of data flow frameworks: A unified model. Acta Informatica, 28:121-163, 1990.
[13]. T. Reps, S. Horwitz, and M. Sagiv. Precise interprocedural dataflow analysis via graph reachability. In Proc. Symp. on Principles of Prog. Lang., pages 49-61, 1995.
[14]. D. Schmidt. Data flow analysis is model checking of abstract interpretations. In Proc. Symp. on Principles of Prog. Lang., pages 38-48, 1998.
[15]. M. Shapiro and S. Horwitz. Fast and accurate flow-insensitive points-to analysis. In Proc. Symp. on Principles of Prog. Lang., pages 1-14, 1997.
[16]. M. Sharir and A. Pnueli. Two approaches to interprocedural data flow analysis. In S. Muchnick and N. Jones, editors, Program Flow Analysis: Theory and Applications, pages 189-234. Prentice Hall, 1981.
[17]. B. Steensgaard. Points-to analysis in almost linear time. In Proc. Symp. on Principles of Prog. Lang., pages 32-41, 1996.
[18]. P. Stocks, B. G. Ryder, W. Landi, and S. Zhang. Comparing flow- and context-sensitivity on the modification side-effects problem. In Proc. International Symposium on Software Testing and Analysis, pages 21-31, 1998.
[19]. R. Wilhelm and D. Maurer. Compiler Design. Addison-Wesley, 1995.
[20]. R. Wilson and M. Lam. Efficient context-sensitive pointer analysis for C programs. In Proc. Conf. on Prog. Lang. Design and Implementation, pages 1-12, 1995.
[21]. S. Zhang, B. G. Ryder, and W. Landi. Program decomposition for pointer aliasing: A step towards practical analyses. In Proc. Symp. on the Foundations of Software Engineering, pages 81-92, 1996.