Program slicing is a method for automatically decomposing programs by analyzing their data flow and control flow. Slicing reduces a program to a minimal form, called a slice, that still produces a specified behavior: a program slice singles out all statements that may have affected the value of a given variable at a specific program point. Slicing is useful in program debugging, program maintenance, and other applications that involve understanding program behavior. In this paper we discuss static and dynamic slicing and compare them using a number of examples.
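The statement-selection idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration for straight-line code only (real slicers work on control-flow and dependence graphs); the program representation and variable names are invented for the example.

```python
# Each statement is modelled as (line_no, defined_var, used_vars).
program = [
    (1, "a", set()),        # a = input()
    (2, "b", set()),        # b = input()
    (3, "x", {"a"}),        # x = a + 1
    (4, "y", {"b"}),        # y = b * 2
    (5, "z", {"x", "y"}),   # z = x + y
]

def backward_slice(program, criterion_var):
    """Collect every statement that may affect criterion_var."""
    relevant, sliced = {criterion_var}, []
    for line_no, defined, used in reversed(program):
        if defined in relevant:
            relevant |= used        # definitions feeding this statement matter too
            sliced.append(line_no)
    return sorted(sliced)

# Slicing on x keeps only lines 1 and 3; slicing on z keeps everything.
print(backward_slice(program, "x"))  # → [1, 3]
```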
Program slicing decomposes a program by analyzing its data and control flow. Its main applications include software engineering activities such as program debugging, program understanding, program maintenance, testing, and complexity measurement. When a slicing technique gathers information about the data and control flow of the program from an actual, specific execution (or set of executions), it is called dynamic slicing; otherwise it is called static slicing. Dynamic slices are generally smaller than static slices, because a dynamic slice contains only those statements that affect the slicing criterion for a particular execution. This paper reports a new approach to program slicing, a mixed static and dynamic slice (S-D slicing) using object-oriented concepts in the C++ language, that reduces the complexity of the program and simplifies it for software engineering applications such as program debugging.
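The claim that dynamic slices are smaller than static ones can be seen on a toy branching program. The statement numbers, the slice sets, and the program itself are hypothetical, chosen only to make the contrast concrete.

```python
# Toy program (statement numbers refer to these comments):
#   1: a = 1
#   2: b = 2
#   3: if cond:
#   4:     x = a
#   5: else:
#   6:     x = b
#   7: print(x)
# A static slice on x at line 7 must keep BOTH possible definitions of x.
STATIC_SLICE = {1, 2, 3, 4, 6}

def dynamic_slice(cond):
    """Statements affecting x at line 7 on the single run selected by cond."""
    if cond:
        return {1, 3, 4}   # only the then-branch definition executed
    return {2, 3, 6}       # only the else-branch definition executed

for cond in (True, False):
    assert dynamic_slice(cond) < STATIC_SLICE  # strict subset: dynamic is smaller
print(sorted(dynamic_slice(True)))  # → [1, 3, 4]
```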
Program slicing is a technique used by programmers and testers to debug, understand, or analyze a program. In this paper, I summarize three different types of program slicing, explaining their concepts and slicing algorithms and giving examples based on three published papers. The three types are static slicing, thin slicing, and dynamic slicing. I then compare the three approaches in terms of accuracy in finding buggy statements, efficiency, and other aspects.
Keywords: Static Slicing - Thin Slicing - Dynamic Slicing -
Debugging - Slicing algorithm.
Software of Time Series Forecasting based on Combinations of Fuzzy and Statis...
The developed software is an open-access web application aimed at forecasting time series stored in a database. We propose an approach to time series forecasting that combines ARIMA models with fuzzy techniques: three fuzzy time series models, the fuzzy transform (F-transform), and the ACL-scale. Applications of the proposed web service have demonstrated its efficiency in practical time series prediction with suitable accuracy.
An operator is a symbol that tells the compiler to perform a specific mathematical or logical manipulation. The C language supports a rich set of built-in operators, which are used in programs to manipulate data and variables.
Functions are the building blocks where every program activity occurs. They are self-contained program segments that carry out a specific, well-defined task; every C program must have at least one function.
This presentation describes the use of functions in the C language: basic function usage and the differences between call by value and call by reference using pointers, illustrated by swapping two numbers in C with the different outputs. Overall, it is a basic note on the C language.
Laura F. Cacovean, Florin Stoica, Dana Simian, A New Model Checking Tool, Proceedings of the 5th European Computing Conference (ECC ’11), Paris, France, pp. 358-363, April 28-30, 2011
Parameter Estimation of Software Reliability Growth Models Using Simulated An...
The parameters of the Goel-Okumoto model are estimated using simulated annealing. The Goel-Okumoto model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated annealing is a heuristic optimization technique that provides a way to escape local optima. The data set is optimized using the simulated annealing technique. SA is a stochastic algorithm with better performance than the Genetic Algorithm (GA); it depends on the specification of the neighbourhood structure of the state space and the parameter settings of its cooling schedule.
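The fitting procedure described above can be sketched as follows. This is not the paper's implementation: the true parameter values (a=100, b=0.05), the synthetic data, the neighbourhood moves, and the cooling schedule are all assumptions made for illustration, and a simple squared-error cost stands in for the NHPP likelihood.

```python
import math, random

random.seed(1)

def m(t, a, b):
    """Goel-Okumoto mean value function: expected failures by time t."""
    return a * (1.0 - math.exp(-b * t))

# Synthetic cumulative-failure data generated from assumed values a=100, b=0.05.
times = list(range(1, 41))
data = [m(t, 100.0, 0.05) for t in times]

def cost(params):
    a, b = params
    return sum((m(t, a, b) - y) ** 2 for t, y in zip(times, data))

def anneal(start, temp=1000.0, cooling=0.995, steps=20000):
    cur, cur_cost = start, cost(start)
    best, best_cost = cur, cur_cost
    for _ in range(steps):
        # Propose a random neighbour of the current (a, b) state.
        cand = (cur[0] + random.uniform(-2, 2),
                max(1e-4, cur[1] + random.uniform(-0.01, 0.01)))
        c = cost(cand)
        # Always accept improvements; accept worse states with a
        # temperature-dependent probability, which lets SA escape local optima.
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        temp *= cooling
    return best

a_hat, b_hat = anneal(start=(50.0, 0.01))
print(round(a_hat, 1), round(b_hat, 3))
```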
How is Mortgage Lending Influencing the Economic Growth in Albania
Banks in Albania have played an important role in providing credit to households, especially in the housing sector, thereby contributing to aggregate demand in that sector. Before the financial crisis, Albanian banks also extended various types of loans to individuals and corporations. After the crisis, the banks became more restrictive, due both to the increase in non-performing loans and to macroeconomic volatility, which is higher in emerging economies such as Albania. This study focuses on the influence of mortgage loans and non-performing loans on the country's economic growth during 2008-2013, the post-crisis period.
Steganography using Interpolation and LSB with Cryptography on Video Images - A...
Steganography, a common term in the IT industry, literally means "covered writing" and is derived from the Greek. Steganography is defined as the art and science of invisible communication: it hides the very existence of the communication between sender and receiver. In contrast to cryptography, where the opponent is permitted to detect, intercept, and alter messages without being able to breach the security guarantees of the cryptosystem, the prime objective of steganography is to conceal messages inside other innocuous messages in a manner that does not allow an adversary to even sense that a second message is present. Nowadays, it is an emerging area used for secure data transmission over public media such as the internet. This research projects a novel approach to image steganography based on LSB (Least Significant Bit) insertion and cryptography for lossless JPEG images. The paper comprises an application that ranks the images in a user's library by their suitability as cover objects for given data. The data is matched to an image, so there is less possibility of an invader being able to employ steganalysis to recover the data. Furthermore, the application first encrypts the data by means of cryptography, and the message bits to be hidden are embedded into the image using the Least Significant Bit insertion technique. Moreover, interpolation is used to increase the density.
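The LSB insertion step can be sketched on raw bytes. This is a minimal, hypothetical model (a flat byte string stands in for decoded pixel data); the paper's encryption and interpolation stages are omitted, and the function names are invented for the example.

```python
def embed(pixels, message):
    """Hide message bytes in the least significant bits of the cover bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(pixels), "cover too small for message"
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(stego)

def extract(pixels, n_bytes):
    """Recover n_bytes of hidden data from the least significant bits."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = bytes(range(64))            # stand-in for 64 pixel values
stego = embed(cover, b"Hi")
print(extract(stego, 2))            # → b'Hi'
```

Because only the lowest bit of each byte changes, every stego value differs from its cover value by at most 1, which is what makes the embedding visually imperceptible.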
Website security is a critical issue that must be considered in order to run an online business healthily and smoothly. It is a very difficult situation when the security of a website is compromised by a brute-force or other kind of attack: it not only consumes all your resources but creates heavy log dumps on the server, which can cause the website to stop working.
Recent studies have suggested backup and recovery modules that can be installed into a website to take timely backups to third-party servers outside the attacker's reach. The studies also cover different types of recovery methods, such as incremental, decremental, differential, and remote backups. Moreover, they suggest using Rsync to reduce the amount of transferred data efficiently. Experimental results show that a remote backup and recovery system can work fast and meet the requirements of website protection. An automatic backup and recovery system for a website not only plays an important role in the web defence system but is also the last line of disaster recovery.
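The distinction between incremental and differential backups mentioned above can be made concrete with a toy model. The file names and timestamps are invented, and modification times stand in for real change detection.

```python
# name -> last-modified time (hypothetical values)
files = {"index.php": 5, "style.css": 9, "db.sql": 12}

FULL_BACKUP_AT = 4        # time of the last full backup
LAST_INCREMENTAL_AT = 10  # time of the most recent incremental backup

def incremental():
    # Copies only files changed since the previous backup of ANY kind.
    return {f for f, t in files.items() if t > LAST_INCREMENTAL_AT}

def differential():
    # Copies everything changed since the last FULL backup.
    return {f for f, t in files.items() if t > FULL_BACKUP_AT}

print(sorted(incremental()))   # → ['db.sql']
print(sorted(differential()))  # → ['db.sql', 'index.php', 'style.css']
```

Incremental chains are smaller per backup but need every link to restore; a differential restore needs only the full backup plus the latest differential.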
This paper suggests approaches that can be incorporated into the WordPress CMS to keep it healthy, secure, and prepared for web attacks. It surveys the various attacks that can be made on a CMS, along with possible solutions and preventive mechanisms.
Some of the proposed security measures:
1. Secret login screen
2. Blocking bad bots
3. Changing database table prefixes
4. Protecting configuration files
5. Two-factor authentication
6. Flight mode in web servers
7. Protecting the .htaccess file itself
8. Detecting vulnerabilities
9. Checking for unauthorized access to the system
However, this must be done by balancing the trade-off between website security and the backup and recovery modules of the website, as measures taken to secure a web page should not adversely affect the user's experience or the recovery modules.
Classification and Searching in Java API Reference Documentation
An Application Program Interface (API) allows programmers to use predefined functions instead of writing them from scratch. Descriptions of API elements (methods, classes, constructors, etc.) are provided through API reference documentation, which thus acts as a guide for users and developers of the API. Different knowledge types are generated by processing this API reference documentation, and these knowledge types are then used for classification and searching of the Java API reference documentation.
Software test-case generation is the process of identifying a set of test cases. It is necessary to generate a test sequence that satisfies the testing criteria, and much past research has addressed this difficult problem. The length of the test sequence plays an important role in software testing, as it decides whether sufficient testing is carried out. Many existing test-sequence generation techniques use a genetic algorithm for test-case generation. The Genetic Algorithm (GA) is an optimization heuristic implemented through evolution and a fitness function; it generates new test cases from the existing test sequence. To improve on existing techniques, this paper proposes a new technique that combines the tabu search algorithm with the genetic algorithm. The hybrid technique combines the strengths of the two meta-heuristic methods and produces an efficient test-case sequence.
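A hybrid of this kind can be sketched in miniature. This is not the paper's algorithm: the toy program under test, the branch-coverage fitness, the crossover/mutation operators, and the tabu policy are all assumptions chosen to show how a tabu list plugs into a GA loop.

```python
import random

random.seed(0)

def branches_covered(x):
    """Toy program under test: which branch ids does input x exercise?"""
    covered = set()
    covered.add("gt10" if x > 10 else "le10")
    covered.add("even" if x % 2 == 0 else "odd")
    return covered

def fitness(suite):
    """Number of distinct branches covered by a 2-input test suite."""
    return len(set().union(*(branches_covered(x) for x in suite)))

def hybrid_ga(pop_size=20, generations=30, tabu_limit=50):
    population = [tuple(random.randint(0, 20) for _ in range(2))
                  for _ in range(pop_size)]
    tabu = set()
    best = max(population, key=fitness)
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        children, attempts = [], 0
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                      # one-point crossover
            if random.random() < 0.3:                 # mutation
                child = (child[0] + random.choice([-1, 1]), child[1])
            attempts += 1
            if child in tabu and attempts < 10 * pop_size:
                continue                              # tabu: skip revisited suites
            tabu.add(child)
            if len(tabu) > tabu_limit:
                tabu.pop()                            # forget an arbitrary entry
            children.append(child)
        population = children
        best = max([best] + population, key=fitness)  # elitism
    return best

suite = hybrid_ga()
print(suite, fitness(suite))
```

The tabu set is what distinguishes the hybrid from a plain GA: candidate suites seen recently are rejected, forcing the search away from already-explored regions.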
Video transmission over wireless local area networks is expected to be an important component of many emerging multimedia applications. However, wireless networks will always be bandwidth-limited compared with fixed networks, due to background noise, limited frequency spectrum, and varying degrees of network coverage and signal strength. One of the critical issues for multimedia applications is ensuring that the Quality of Service (QoS) requirement is maintained at an acceptable level. Modern mobile devices are equipped with multiple network interfaces, including 3G/LTE and WiFi. Bandwidth aggregation over LTE and WiFi links offers an attractive opportunity for supporting bandwidth-intensive services, such as high-quality video streaming, on mobile devices. Achieving effective bandwidth aggregation in wireless environments raises several challenges related to deployment, link heterogeneity, network congestion, network fluctuation, and energy consumption. This work presents an overview of schemes for video transmission over wireless networks in which an acceptable QoS for real-time video applications is achieved.
Different Approaches of Video Compression Technique: A Study
The main objective of video compression is to compress video with the fewest possible losses, in order to reduce transmission bandwidth and storage requirements. This paper discusses different approaches to video compression for better transmission of video frames in multimedia applications. Video compression methods such as the frame-difference approach, a PCA-based method, the accordion function, fuzzy concepts, EZW, and FSBM are analyzed and compared for performance, speed, accuracy, and visual quality.
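The frame-difference approach mentioned above can be sketched as a lossless toy model: store the first frame in full, then only the pixels that changed between consecutive frames. The 4-pixel "frames" and function names are invented for the example; real codecs quantize and entropy-code these deltas.

```python
def compress(frames):
    key, deltas = frames[0], []
    prev = key
    for frame in frames[1:]:
        # Record (index, new_value) only where a pixel changed.
        deltas.append([(i, v) for i, (u, v) in enumerate(zip(prev, frame)) if u != v])
        prev = frame
    return key, deltas

def decompress(key, deltas):
    frames, cur = [list(key)], list(key)
    for delta in deltas:
        cur = list(cur)
        for i, v in delta:
            cur[i] = v
        frames.append(cur)
    return frames

video = [[5, 5, 5, 5], [5, 5, 9, 5], [5, 5, 9, 7]]  # three tiny 4-pixel frames
key, deltas = compress(video)
print(deltas)   # → [[(2, 9)], [(3, 7)]]
```

Two single-pixel deltas replace two full frames, which is where the bandwidth saving in largely static scenes comes from.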
Designing of Semantic Nearest Neighbor Search: Survey
Conventional spatial queries, such as range search and nearest neighbor retrieval, involve only conditions on objects' geometric properties. Today, many modern applications call for novel forms of queries that aim to find objects satisfying both a spatial predicate and a predicate on their associated texts. For example, instead of considering all restaurants, a nearest neighbor query might ask for the restaurant that is closest among those whose menus contain "steak", "spaghetti", and "brandy" all at the same time. Currently the best solution to such queries is based on the IR2-tree, which, as shown in this paper, has a few deficiencies that seriously impact its efficiency. Motivated by this, we develop a new access method called the spatial inverted index, which extends the conventional inverted index to cope with multidimensional data and comes with algorithms that can answer nearest neighbor queries with keywords in real time. As verified by experiments, the proposed techniques significantly outperform the IR2-tree in query response time, often by orders of magnitude.
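The query semantics described above can be sketched naively: an inverted index maps each keyword to the objects containing it, a query intersects the posting lists, and the survivors are ranked by distance. The restaurant data is invented, and this linear scan is of course not the paper's index structure.

```python
import math

objects = {
    "r1": ((1.0, 1.0), {"steak", "brandy"}),
    "r2": ((0.5, 0.5), {"spaghetti"}),
    "r3": ((4.0, 4.0), {"steak", "spaghetti", "brandy"}),
}

# Build the inverted index: keyword -> set of object ids.
index = {}
for oid, (_, words) in objects.items():
    for w in words:
        index.setdefault(w, set()).add(oid)

def nn_with_keywords(query_point, keywords):
    # Intersect posting lists to keep only objects with ALL the keywords...
    candidates = set.intersection(*(index.get(w, set()) for w in keywords))
    # ...then pick the spatially closest candidate.
    return min(candidates,
               key=lambda oid: math.dist(query_point, objects[oid][0]),
               default=None)

print(nn_with_keywords((0.0, 0.0), {"steak", "spaghetti", "brandy"}))  # → r3
```

Note that r1 and r2 are closer to the query point but fail the keyword predicate, which is exactly why a purely spatial index is insufficient here.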
A Review on a Web-based Punjabi to English Machine Transliteration System
This paper presents the transliteration of noun phrases from Punjabi to English using a statistical machine translation approach. Transliteration maps the letters of a source script to the letters of another language. Forward transliteration converts an original word or phrase in the source language into a word in the target language; backward transliteration is the reverse process, converting the transliterated word or phrase back into its original form. Transliteration is an important part of research in NLP. Natural Language Processing (NLP) is the ability of a computer program to understand human speech as it is spoken, and it is an important component of AI. Artificial Intelligence is a branch of science concerned with helping machines find solutions to complex problems in a human-like fashion. The transliteration system is to be developed using SMT. Statistical Machine Translation (SMT) is a data-oriented statistical framework for translating text from one natural language to another based on the knowledge
Dielectric and Thermal Characterization of Insulating Organic Varnish Used in...
In recent days, a lot of attention has been drawn to polymer nanocomposites for use in electrical applications, due to encouraging results obtained for their dielectric properties. Polymer nanocomposites are commonly defined as a combination of a polymer matrix and additives that have at least one dimension on the nanometer scale. Carbon nanotubes are of special interest as a possible organic component in such a composite coating. The carbon atoms are arranged in a hexagonal network rolled up to form a seamless cylinder that measures several nanometers across but can be thousands of nanometers long. There are many different types, but the two main categories are single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs), the latter made from multiple layers of graphite. Carbon nanotubes are an example of a nanostructure varying in size from 1-100 nanometers (the scale of atoms and molecules), and nanocomposites are one of the fastest-growing fields in nanotechnology. An extensive literature survey has been done on nanocomposites and the synthesis and preparation of nano fillers. The following objectives were set based on the literature survey and an understanding of the technology:
Complete study of Organic varnish and CNT
Chemical properties
Electrical properties
Thermal properties
Mechanical properties
Synthesis and characterization of carbon nanotubes
Preparation of polymer nanocomposites
Study of characteristics of the nanocomposite insulation
Dimensioning an insulation system requires exact knowledge of the type, magnitude, and duration of the electric stress, while simultaneously considering the ambient conditions. On the other hand, the properties of the insulating materials in question must also be known, so that in addition to the proper material, the optimum (e.g. the most economical) design of the insulation system can be chosen.
A Literature Review on Synthesis and Characterization of Enamelled Copper Wi...
This paper surveys various magazines, conference papers, and journals to understand the properties of enamelled copper wires mixed with nano fillers and the fundamental methods for synthesis and characterization of carbon nanotubes. From these papers, it was noted that research on enamelled copper wires filled with nano fillers has shown good results. It was also observed that the research was carried out mostly with single-metal catalysts, and that very little work has been done on the synthesis of carbon nanotubes using bimetallic catalysts.
Spam Filtering by using Genetic-based Feature Selection
Spam is defined as redundant and unwanted electronic mail, and it now creates many problems in business life, such as occupying network bandwidth and space in users' mailboxes. Because of these problems, much research has been carried out in this area using classification techniques. Recent research shows that feature selection can have a positive effect on the efficiency of machine learning algorithms. Most algorithms try to build a data model depending on reliable detection of a small set of features; unrelated features in the modeling process result in weak estimation and extra computation. This research evaluates spam detection in legitimate electronic mail, and its effect on several machine learning algorithms, by presenting a feature selection method based on a genetic algorithm. Bayesian network and KNN classifiers are used in the classification phase, with the Spambase dataset.
Design of STT-RAM Cell in 45nm Hybrid CMOS/MTJ Process
This paper evaluates the performance of Spin-Torque Transfer Random Access Memory (STT-RAM) basic memory cell configurations in a 45nm hybrid CMOS/MTJ process. Switching speed and the current drawn by the cells have been calculated and compared. Cell design has been done using Cadence tools. The results obtained show good agreement with theoretical results.
Traditional interprocedural data-flow analysis is performed on whole programs; however, such whole-program analysis is not feasible for large or incomplete programs. We propose fragment data-flow analysis as an alternative approach, which computes data-flow information for a specific program fragment. The analysis is parameterized by the additional information available about the rest of the program. We describe two frameworks for interprocedural flow-sensitive fragment analysis, the relationship between fragment analysis and whole-program analysis, and the requirements ensuring fragment analysis safety and feasibility. We propose an application of fragment analysis as a second analysis phase after an inexpensive flow-insensitive whole-program analysis, in order to obtain better information for important program fragments. We also describe the design of two fragment analyses derived from an existing whole-program flow- and context-sensitive pointer alias analysis for C programs, and present an empirical evaluation of their cost and precision. Our experiments give evidence of the dramatic precision improvements obtainable at a practical cost.
Many organizations, programmers, and researchers need to debug, test, and maintain segments of their source code to improve their systems, and program slicing is one of the best techniques for doing so. Many slicing techniques are available for such problems, including static slicing, dynamic slicing, and amorphous slicing. In this paper, we develop a tool that supports multiple slicing techniques. The proposed tool provides new, flexible ways to process simple segments of Java code and generates the needed slices according to the user's needs; it provides the user with the direct and indirect dependencies of each variable in the code segment. The tool works under various operating systems and does not need a particular environment, making it helpful for debugging, testing, education, and many other purposes.
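The direct/indirect dependency report mentioned in the abstract above can be sketched as a transitive closure over per-variable use sets. The variables and their dependencies are hypothetical, standing in for what such a tool would extract from Java assignments.

```python
direct = {           # variable -> variables used directly to compute it
    "a": set(),      # a = 1
    "b": {"a"},      # b = a + 1
    "c": {"b"},      # c = b * 2
    "d": {"a", "c"}, # d = a + c
}

def indirect(var):
    """All variables var depends on, directly or transitively."""
    seen, stack = set(), list(direct.get(var, ()))
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(direct.get(v, ()))  # follow that variable's own deps
    return seen

print(sorted(indirect("d")))  # → ['a', 'b', 'c']
```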
The article describes 7 types of metrics and more than 50 representatives of them, provides detailed descriptions and the calculation algorithms used, and touches upon the role of metrics in software development.
Program analysis is useful for debugging, testing, and maintenance of software systems because it yields information about the structure and relationships of a program's modules. In general, program analysis is performed on either a control flow graph or a dependence graph. However, in the case of aspect-oriented programming (AOP), a control flow graph (CFG) or dependence graph (DG) alone is not enough to model the properties of aspect-oriented (AO) programs. Although AOP is good for modular representation of crosscutting concerns, a suitable model is required to gather information on an AO program's structure for the purpose of minimizing maintenance effort. In this paper, the Aspect-Oriented Dependence Flow Graph (AODFG) is proposed as an intermediate representation to model the structure of aspect-oriented programs. The AODFG is formed by merging the CFG and the DG, so that more information about the dependencies between join points, advice, aspects, and their associated constructs is gathered together with the flow of control from one statement to another. We discuss the performance of AODFG by analyzing examples of AspectJ programs taken from the AspectJ Development Tools (AJDT).
Duplicate Code Detection using Control Statements
Code clone detection is an important area of research, as reusability is a key factor in software evolution. Duplicate code degrades the design and structure of software and software qualities such as readability, changeability, and maintainability, and it increases maintenance cost, since incorrect changes in copied code may lead to more errors. In this paper we address structural code-similarity detection and propose new methods to detect structural clones using the structure of control statements, i.e. the order of the control statements used in the source code. We consider two orders of control structures: (i) the sequence of control statements as they appear in the text, and (ii) the execution flow of the control statements.
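The textual-order variant of this idea can be sketched by reducing each function to its sequence of control keywords and comparing those sequences. The sample snippets are invented; the paper's actual matching method may differ.

```python
import re

# Control keywords to extract, in textual order.
CONTROL = re.compile(r"\b(if|else|for|while|switch|do)\b")

def control_signature(source):
    """Ordered list of control keywords in the source text, ignoring
    identifiers and expressions entirely."""
    return CONTROL.findall(source)

f1 = "for (i=0;i<n;i++) { if (a[i]>max) max=a[i]; }"
f2 = "for (j=0;j<len;j++) { if (b[j]>best) best=b[j]; }"
f3 = "while (p) { p = p->next; }"

print(control_signature(f1) == control_signature(f2))  # → True  (structural clone)
print(control_signature(f1) == control_signature(f3))  # → False
```

f1 and f2 differ in every identifier yet share the signature ["for", "if"], which is precisely the kind of renamed copy that line-by-line textual comparison misses.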
How to Calculate Cyclomatic Complexity through Various Methods
A brief description of cyclomatic complexity is given, and various methods for calculating it are described along with an example; advantages and disadvantages are also mentioned.
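One of the standard calculation methods, counting decision points and adding 1 (equivalent to E - N + 2P on the control flow graph), can be sketched as a keyword count over source text. The keyword list and sample code are illustrative; real tools parse the code rather than pattern-match it.

```python
import re

# Branching constructs that add a decision point (simplified list).
DECISIONS = re.compile(r"\b(if|for|while|case|catch)\b|&&|\|\|")

def cyclomatic_complexity(source):
    """Decision points + 1."""
    return len(DECISIONS.findall(source)) + 1

code = """
if (x > 0 && y > 0) { p(); }
for (i = 0; i < n; i++) {
    while (q(i)) { r(); }
}
"""
print(cyclomatic_complexity(code))  # → 5  (if, &&, for, while, then +1)
```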
Text Mining in Digital Libraries using OKAPI BM25 Model
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not apply relevance ranking to retrieved results. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted, with the model design based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval; relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed the Boolean and vector space models. This paper therefore proposes using Okapi BM25, which rewards terms according to their relative frequencies in a document, to improve the performance of text mining in digital libraries.
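The BM25 scoring the abstract above relies on can be sketched over a tiny corpus. The documents and query are invented, and k1 and b are set to conventional defaults (1.5 and 0.75); a smoothed idf variant is used so scores stay non-negative on this tiny collection.

```python
import math

docs = [
    "the library has digital books".split(),
    "digital library of the digital age".split(),
    "cooking recipes and kitchen tips".split(),
]
N = len(docs)
avgdl = sum(len(d) for d in docs) / N   # average document length

def idf(term):
    n = sum(term in d for d in docs)    # documents containing the term
    return math.log((N - n + 0.5) / (n + 0.5) + 1)

def bm25(query, doc, k1=1.5, b=0.75):
    score = 0.0
    for term in query:
        tf = doc.count(term)
        # Term-frequency saturation (k1) and length normalization (b).
        score += idf(term) * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

query = "digital library".split()
ranked = sorted(range(N), key=lambda i: bm25(query, docs[i]), reverse=True)
print(ranked)   # best-matching document indices first
```

Document 1 outranks document 0 because "digital" occurs twice in it, and the cooking document scores zero: exactly the relevance ordering a Boolean model cannot produce.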
Green Computing, Eco Trends, Climate Change, E-waste and Eco-friendly
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in NigeriaEditor IJCATR
Computers today are an integral part of individuals’ lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and that a more environment -friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria and highlight a series of measures and the advantage they herald for our country and propose a series of action steps to develop in these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...Editor IJCATR
Vehicular ad hoc networks (VANETs) are a favorable area of exploration which empowers the interconnection amid the movable vehicles and between transportable units (vehicles) and road side units (RSU). In Vehicular Ad Hoc Networks (VANETs), mobile vehicles can be organized into assemblage to promote interconnection links. The assemblage arrangement according to dimensions and geographical extend has serious influence on attribute of interaction .Vehicular ad hoc networks (VANETs) are subclass of mobile Ad-hoc network involving more complex mobility patterns. Because of mobility the topology changes very frequently. This raises a number of technical challenges including the stability of the network .There is a need for assemblage configuration leading to more stable realistic network. The paper provides investigation of various simulation scenarios in which cluster using k-means algorithm are generated and their numbers are varied to find the more stable configuration in real scenario of road.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complication. There are several laboratory test that must be done to detect DM. The result of this laboratory test then converted into data training. Data training used in this study generated from UCI Pima Database with 6 attributes that were used to classify positive or negative diabetes. There are various classification methods that are commonly used, and in this study three of them were compared, which were fuzzy KNN, C4.5 algorithm and Naïve Bayes Classifier (NBC) with one identical case. The objective of this study was to create software to classify DM using tested methods and compared the three methods based on accuracy, precision, and recall. The results showed that the best method was Fuzzy KNN with average and maximum accuracy reached 96% and 98%, respectively. In second place, NBC method had respective average and maximum accuracy of 87.5% and 90%. Lastly, C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Study in the Competitive field of Intelligent, and studies in the field of Web Scraping, have a symbiotic relationship mutualism. In the information age today, the website serves as a main source. The research focus is on how to get data from websites and how to slow down the intensity of the download. The problem that arises is the website sources are autonomous so that vulnerable changes the structure of the content at any time. The next problem is the system intrusion detection snort installed on the server to detect bot crawler. So the researchers propose the use of the methods of Mining Data Records and the method of Exponential Smoothing so that adaptive to changes in the structure of the content and do a browse or fetch automatically follow the pattern of the occurrences of the news. The results of the tests, with the threshold 0.3 for MDR and similarity threshold score 0.65 for STM, using recall and precision values produce f-measure average 92.6%. While the results of the tests of the exponential estimation smoothing using ? = 0.5 produces MAE 18.2 datarecord duplicate. It slowed down to 3.6 datarecord from 21.8 datarecord results schedule download/fetch fix in an average time of occurrence news.
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes using single ontology. The ontology-based semantic similarity techniques such as structure-based semantic similarity techniques (Path Length Measure, Wu and Palmer’s Measure, and Leacock and Chodorow’s measure), information content-based similarity techniques (Resnik’s measure, Lin’s measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen’s measure (SimDist)) were evaluated relative to human experts’ ratings, and compared on sets of concepts using the ICD-10 “V1.0” terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in single ontology, and demonstrate that SemDist semantic similarity techniques, compared with the existing techniques, gives the best overall results of correlation with experts’ ratings.
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
The techniques and tests are tools used to define how measure the goodness of ontology or its resources. The similarity between biomedical classes/concepts is an important task for the biomedical information extraction and knowledge discovery. However, most of the semantic similarity techniques can be adopted to be used in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we investigate to measure semantic similarity between two terms within single ontology or multiple ontologies in ICD-10 “V1.0” as primary source, and compare my results to human experts score by correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
This is an effective way to improve the storage access performance of small files in Openstack Swift by adding an aggregate storage module. Because Swift will lead to too much disk operation when querying metadata, the transfer performance of plenty of small files is low. In this paper, we propose an aggregated storage strategy (ASS), and implement it in Swift. ASS comprises two parts which include merge storage and index storage. At the first stage, ASS arranges the write request queue in chronological order, and then stores objects in volumes. These volumes are large files that are stored in Swift actually. During the short encounter time, the object-to-volume mapping information is stored in Key-Value store at the second stage. The experimental results show that the ASS can effectively improve Swift's small file transfer performance.
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of government's cash resources rely on government banking arrangements. Nigeria, like many low income countries, employed fragmented systems in handling government receipts and payments. Later in 2016, Nigeria implemented a unified structure as recommended by the IMF, where all government funds are collected in one account would reduce borrowing costs, extend credit and improve government's fiscal policy among other benefits to government. This situation motivated us to embark on this research to design and implement an integrated system for vehicle clearance and registration. This system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among five different level agencies (NCS, FRSC, SBIR, VIO and NPF) saddled with vehicular administration and activities in Nigeria. Since the system is web based, Object Oriented Hypermedia Design Methodology (OOHDM) is used. Tools such as Php, JavaScript, css, html, AJAX and other web development technologies were used. The result is a web based system that gives proper information about a vehicle starting from the exact date of importation to registration and renewal of licensing. Vehicle owner information, custom duty information, plate number registration details, etc. will also be efficiently retrieved from the system by any of the agencies without contacting the other agency at any point in time. Also number plate will no longer be the only means of vehicle identification as it is presently the case in Nigeria, because the unified system will automatically generate and assigned a Unique Vehicle Identification Pin Number (UVIPN) on payment of duty in the system to the vehicle and the UVIPN will be linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of good and services. It includes both sales and purchase of items. The project Supermarket Management System is to be developed with the objective of making the system reliable, easier, fast, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in the Wireless Sensor Network (WSN)[1]. The system will not be able to run according to its function without the availability of adequate power units. One of the characteristics of wireless sensor network is Limitation energy[2]. A lot of research has been done to develop strategies to overcome this problem. One of them is clustering technique. The popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3]. In LEACH, clustering techniques are used to determine Cluster Head (CH), which will then be assigned to forward packets to Base Station (BS). In this research, we propose other clustering techniques, which utilize the Social Network Analysis approach theory of Betweeness Centrality (BC) which will then be implemented in the Setup phase. While in the Steady-State phase, one of the heuristic searching algorithms, Modified Bi-Directional A* (MBDA *) is implemented. The experiment was performed deploy 100 nodes statically in the 100x100 area, with one Base Station at coordinates (50,50). To find out the reliability of the system, the experiment to do in 5000 rounds. The performance of the designed routing protocol strategy will be tested based on network lifetime, throughput, and residual energy. The results show that BC-MBDA * is better than LEACH. This is influenced by the ways of working LEACH in determining the CH that is dynamic, which is always changing in every data transmission process. This will result in the use of energy, because they always doing any computation to determine CH in every transmission process. In contrast to BC-MBDA *, CH is statically determined, so it can decrease energy usage.
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers has thrown up new challenges for legacy; existing networks; and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Network (SDN) which decouples the control plane from the data plane through network vitalizations aims to address these challenges. This paper has explored the SDN architecture and its implementation with the OpenFlow protocol. It has also assessed some of its benefits over traditional network architectures, security concerns and how it can be addressed in future research and related works in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depending on the system administrator who manually reads every incoming report [3]. Read manually can lead to errors in handling complaints [4] if the data flow is huge and grows rapidly, it needs at least three days to prepare a confirmation and it sensitive to inconsistencies [3]. In this study, the authors propose a model that can measure the identities of the Query (Incoming) with Document (Archive). The authors employed Class-Based Indexing term weighting scheme, and Cosine Similarities to analyse document similarities. CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values used in classification as feature for K-Nearest Neighbour (K-NN) classifier. The optimum result evaluation is pre-processing employ 75% of training data ratio and 25% of test data with CoSimTFIDF feature. It deliver a high accuracy 84%. The k = 5 value obtain high accuracy 84.12%
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
A Comparative Analysis of Slicing for Structured Programs

International Journal of Computer Applications Technology and Research
Volume 3, Issue 10, 634-639, 2014

Lipika Jha and K.S. Patnaik
Department of Computer Science and Engineering,
Birla Institute of Technology, Mesra, Ranchi, India
Abstract: Program slicing is a method for automatically decomposing programs by analyzing their data flow and control flow. Slicing reduces the program to a minimal form, called a "slice", which still produces a given behavior. A program slice singles out all statements that may have affected the value of a given variable at a specific program point. Slicing is useful in program debugging, program maintenance and other applications that involve understanding program behavior. In this paper we discuss static and dynamic slicing and compare them using a number of examples.
Keywords: Slicing techniques, control and data dependence, data flow equation, control flow graph, program dependence graph
1. INTRODUCTION

Program slicing is one of the techniques of program analysis. It is an alternative approach to developing reusable components from existing software. To extract reusable functions from ill-structured programs we need a decomposition method that is able to group non-sequential sets of statements. A program is decomposed based on program analysis of its data flow and control flow. The decomposed program is called a slice, which is obtained by iteratively solving data flow equations based on a program flow graph. Program analysis uses statement dependence information (i.e. data and control dependence) to identify the parts of a program that influence, or are influenced by, a variable at a particular point of interest, called the slicing criterion.
Slicing Criterion:

C = (n, V)

where n is a statement in program P and V is a variable in P. A slice S consists of all statements in program P that may affect the value of variable V at point n. Program slicing describes a mechanism or tool which allows the automatic generation of a slice: all statements affecting or affected by the variable V at n mentioned in the slicing criterion become part of the slice.
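To make the criterion concrete, here is a small hand-worked sketch (an illustration, not an example from the paper): a ten-statement program, its combined data and control dependences encoded by hand as a graph, and a backward slice for the criterion C = (9, s) computed as the transitive closure of the dependence edges.

```python
# Program being sliced (statement numbers on the left):
#   1: n = int(input())
#   2: i = 1
#   3: s = 0          # running sum
#   4: p = 1          # running product
#   5: while i <= n:
#   6:     s = s + i
#   7:     p = p * i
#   8:     i = i + 1
#   9: print(s)
#  10: print(p)
#
# deps[j] = statements that j is data or control dependent on
# (hand-derived for the program above).
deps = {
    1: set(), 2: set(), 3: set(), 4: set(),
    5: {1, 2, 8},          # loop test uses n (from 1) and i (from 2 or 8)
    6: {2, 3, 5, 6, 8},    # uses s (3 or 6) and i (2 or 8); controlled by 5
    7: {2, 4, 5, 7, 8},    # uses p (4 or 7) and i (2 or 8); controlled by 5
    8: {2, 5, 8},          # uses i (2 or 8); controlled by 5
    9: {3, 6},             # uses s
    10: {4, 7},            # uses p
}

def backward_slice(criterion, deps):
    """All statements that may affect the criterion statement."""
    worklist, in_slice = [criterion], set()
    while worklist:
        stmt = worklist.pop()
        if stmt not in in_slice:
            in_slice.add(stmt)
            worklist.extend(deps[stmt])
    return sorted(in_slice)

# Slice for C = (9, s): the product statements 4, 7 and 10 drop out.
print(backward_slice(9, deps))  # → [1, 2, 3, 5, 6, 8, 9]
```

The slice keeps exactly the statements needed to reproduce the value of s at statement 9, which is the projection property described above.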
This paper gives a detailed description of slicing techniques and a comparison of the different kinds of slicing. The paper is organized as follows. Section 2 defines some common slicing techniques. Section 3 defines static and dynamic slicing. Section 4 compares the different kinds of slicing, Section 5 presents the conclusion, and the acknowledgements and references follow.
2. SLICING TECHNIQUE
The original concept of a program slice was
introduced by Weiser [1]. He claims that a slice
corresponds to the mental abstractions that
people make when they are debugging a
program. A slice S consists of all statements in
program P that may affect the value of variable
V at some point n.
[International Journal of Computer Applications Technology and Research, Volume 3, Issue 10, 634-639, 2014; www.ijcat.com]
The value of variable V at statement n can be
affected by other statements because of:
– statements that control the execution
of n (control dependence);
– statements that define the values used
at n (data dependence).
The goal of slicing is to create a subprogram of
the program (by eliminating some statements),
such that the projection and the original program
compute the same values for all variables in V at
point n.
The computation of a slice is done using data
dependence and control dependence. Data
dependence and control dependence are defined
in terms of the CFG of a program. A control
flow graph is a graph representation of a
program procedure: the nodes of the graph
represent program statements and the edges
represent control flow transfers between
statements.
A control flow graph for program P is a graph
in which each node is associated with a
statement from P and the edges represent the
flow of control in P. With each node n (i.e., each
statement in the program) we associate two sets:
USE(n), the set of variables whose values are
used at n, and DEF(n), the set of variables
whose values are defined at n.
There are three main techniques to compute a
slice:
2.1 Data dependence (DD) and control
dependence (CD)
Data dependence and control dependence are
defined in terms of the CFG of a program. A
statement j is data dependent on statement i if a
value computed at i is used at j in some program
execution. More precisely, there exists a variable
x such that:
(i) x ∈ DEF(i) and x ∈ USE(j), or
x ∈ DEF(i) and x ∈ DEF(j); and
(ii) there exists a path from i to j without
intervening definitions of x.
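Condition (i) together with the def-clear path requirement of condition (ii) can be checked directly on a CFG annotated with USE and DEF sets. The following C++ sketch (a minimal encoding of our own, not taken from the paper) collects data dependence pairs by walking forward from each definition and stopping wherever the variable is redefined:

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>
using namespace std;

// Minimal CFG node: USE(n), DEF(n) and successor edges.
struct Node {
    set<string> use, def;
    vector<int> succ;
};

// j is data dependent on i if some x in DEF(i) is in USE(j) and a
// def-clear path for x leads from i to j: walk forward from i and
// stop wherever x is redefined.
vector<pair<int,int>> dataDeps(const vector<Node>& cfg) {
    vector<pair<int,int>> dd;
    for (int i = 0; i < (int)cfg.size(); ++i)
        for (const string& x : cfg[i].def) {
            vector<bool> seen(cfg.size(), false);
            vector<int> work = cfg[i].succ;
            while (!work.empty()) {
                int j = work.back(); work.pop_back();
                if (seen[j]) continue;
                seen[j] = true;
                if (cfg[j].use.count(x)) dd.push_back({i, j});
                if (!cfg[j].def.count(x))      // keep the path def-clear
                    for (int s : cfg[j].succ) work.push_back(s);
            }
        }
    return dd;
}
```

For the straight-line fragment x=0; y=x; x=2; z=x this yields the pairs (0,1) and (2,3) but not (0,3), since the redefinition of x at node 2 blocks the path.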
Control dependence information identifies the
conditional nodes that may affect the execution
of a node in the slice. In a CFG, a node j
post-dominates a node i if and only if j is a
member of every path from i to Stop. Node j is
control dependent on node i in program P if
1. there exists a path p from i to j such that j
post-dominates every node on p other than i, and
2. i is not post-dominated by j.
The slice for criterion C is then the set of nodes
from which a node in C can be reached through
dependence edges:
S_C = {m | ∃n. n ∈ C and m →* n},
where → ranges over the union of the data and
control dependence edges (DD ∪ CD).
2.2 Data Flow Equation
According to Weiser, a slice is computed by
iteratively solving data flow equations. He
defines a slice as an executable program that is
obtained from the original program by deleting
zero or more statements. At least one slice exists
for any criterion and any program, namely the
program itself.
The set of relevant variables R⁰_C(i) with
respect to slicing criterion C = (p, V) is defined,
for an edge i →_CFG j, by:
1. R⁰_C(i) = V when i = p;
2. R⁰_C(i) = (R⁰_C(j) − DEF(i)) ∪ USE(i)
   if R⁰_C(j) ∩ DEF(i) ≠ ø,
   and R⁰_C(i) = R⁰_C(j) otherwise.
The set of directly relevant statements of C,
denoted S⁰_C, is defined as:
S⁰_C = {i | DEF(i) ∩ R⁰_C(j) ≠ ø, i →_CFG j}.
The set of conditional statements which control
the execution of the statements in S⁰_C, denoted
B⁰_C, is defined as:
B⁰_C = {b ∈ G | INFL(b) ∩ S⁰_C ≠ ø},
where INFL(b) is the set of statements whose
execution is controlled by branch b (for
example, a loop header at node 5 whose body is
nodes 6-8 has INFL(5) = {6,7,8}).
The sliced program S_C is defined recursively on
the sets of variables and statements which have
either a direct or an indirect influence on V.
Starting from zero, the superscripts represent the
level of recursion.
B^i_C = {b ∈ G | INFL(b) ∩ S^i_C ≠ ø}
R^{i+1}_C(n) = R^i_C(n) ∪ ⋃_{b ∈ B^i_C} R⁰_{(b, USE(b))}(n)
S^{i+1}_C = {n ∈ G | DEF(n) ∩ R^{i+1}_C(j) ≠ ø,
n →_CFG j} ∪ B^i_C
The sliced program thus includes the conditional
statements with an indirect influence on the
slice, the control variables which are evaluated
in their logical expressions, and the statements
which influence those control variables.
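To make the equations concrete, the following C++ sketch iterates them to a fixpoint on an explicit CFG. The node encoding, the INFL map and the branch handling are our own assumptions; Weiser's formulation is only the set of equations above.

```cpp
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>
using namespace std;

using VarSet = set<string>;
struct Node { VarSet use, def; vector<int> succ; };

// R_C(i): relevant variables immediately before node i for criterion
// (p, V), computed as a backward fixpoint of the data flow equation.
vector<VarSet> relevant(const vector<Node>& g, int p, const VarSet& V) {
    vector<VarSet> R(g.size());
    for (bool changed = true; changed; ) {
        changed = false;
        for (int i = 0; i < (int)g.size(); ++i) {
            VarSet r = (i == p) ? V : VarSet{};
            for (int j : g[i].succ) {
                bool kills = false;
                for (const string& x : g[i].def)
                    if (R[j].count(x)) kills = true;
                if (kills) {                 // (R(j) - DEF(i)) U USE(i)
                    for (const string& x : R[j])
                        if (!g[i].def.count(x)) r.insert(x);
                    r.insert(g[i].use.begin(), g[i].use.end());
                } else r.insert(R[j].begin(), R[j].end());
            }
            if (r != R[i]) { R[i] = r; changed = true; }
        }
    }
    return R;
}

// Levels of recursion: statements defining a relevant variable enter
// S; branches whose INFL set meets S enter B and contribute the new
// criteria (b, USE(b)); repeat until nothing grows.
set<int> weiserSlice(const vector<Node>& g,
                     const map<int, set<int>>& INFL,
                     int p, const VarSet& V) {
    set<int> S, B;
    vector<pair<int, VarSet>> crits{{p, V}};
    for (bool grew = true; grew; ) {
        grew = false;
        for (auto& [cp, cv] : crits) {
            auto R = relevant(g, cp, cv);
            for (int i = 0; i < (int)g.size(); ++i)
                for (int j : g[i].succ)
                    for (const string& x : g[i].def)
                        if (R[j].count(x) && S.insert(i).second)
                            grew = true;
        }
        for (auto& [b, infl] : INFL)
            for (int s : infl)
                if (S.count(s) && !B.count(b)) {
                    B.insert(b); S.insert(b);
                    crits.push_back({b, g[b].use});
                    grew = true;
                }
    }
    return S;  // relevant statements plus their controlling branches
}
```

Run on Example 1 of Section 4 (with INFL(3) taken to cover statements 4-12, i.e., the if is read as guarding the whole loop, as the paper's results imply), this returns the slice {2,3,4,6,8,9,11,12}.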
2.3 Program dependence graph
Program dependence graph is defined in terms
of a program’s control flow graph .The PDG
includes the same set of vertices as the CFG,
excluding the EXIT vertex. The edges of the
PDG represent the control and flow dependence
induced by the CFG.
The extraction of slices is based on data
dependence and control dependence. A slice is
directly obtained by a linear time walk
backwards from some point in the graph,
visiting all predecessors.
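The backward walk is plain graph reachability. The sketch below illustrates it with our own vertex numbering for Example 1 of Section 4 (the PDG encoding is an assumption, not notation from the paper):

```cpp
#include <map>
#include <set>
#include <vector>
using namespace std;

// deps[v] lists the vertices that v is data or control dependent on
// (i.e., the PDG edges traversed backwards). The slice for vertex s
// is everything reachable from s along these reversed edges; each
// vertex is pushed at most once, so the walk is linear time.
set<int> pdgSlice(const map<int, vector<int>>& deps, int s) {
    set<int> slice{s};
    vector<int> work{s};
    while (!work.empty()) {
        int v = work.back(); work.pop_back();
        auto it = deps.find(v);
        if (it == deps.end()) continue;
        for (int u : it->second)
            if (slice.insert(u).second) work.push_back(u);
    }
    return slice;
}
```

With a PDG for Example 1 and criterion vertex 14, the walk returns the slice statements together with the criterion vertex itself.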
3. TYPES OF SLICING
3.1 Static Slicing
A static slice includes all the statements that
affect the variable v, or are affected by it, at the
point of interest (i.e., at statement x). It is
computed by finding consecutive sets of
indirectly relevant statements, according to data
and control dependencies. A static slicing
criterion consists of a pair (n, V), where n is the
point of interest and V is a variable of the
program based on which the program will be
sliced.
3.2 Dynamic Slicing
A dynamic program slice includes only the
statements that did affect the value of the
variable occurrence for the given program
inputs, not all statements that could affect it. A
dynamic slicing criterion consists of a triple
(n, V, I), where I is an input to the program. In
contrast, static slicing uses only statically
available information for computing slices.
4. COMPARISON USING DATA FLOW
EQUATION
Example 1: Let us consider a program which
computes the sum and product of the first n
numbers, using a single loop.
void main()
1. {int n;
2. cin>>n;
3. if (n>0)
4. int i=1;
5. int sum=0;
6. int product=1;
7. int k;
8. while(i<=n)
9. {cin>>k;
10. sum=sum+k;
11. product=product*k;
12. i=i+1;}
13. cout<<sum;
14. cout<<product;
}
Static slicing: Slicing criterion C is (14, product)
Statement | USE()     | DEF()   | R⁰_C
1         | ø         | ø       | product
2         | ø         | n       | product
3         | n         | ø       | product
4         | ø         | i       | ø
5         | ø         | sum     | ø
6         | ø         | product | ø
7         | ø         | ø       | product
8         | i,n       | ø       | product
9         | ø         | k       | product
10        | sum,k     | sum     | product,k
11        | product,k | product | product,k
12        | i         | i       | product
13        | sum       | ø       | product
14        | product   | ø       | product
(R⁰_C(i) is the set of relevant variables
immediately before statement i; the false branch
of the if at statement 3 is taken to lead to
statement 13.)
S⁰_C = {6,9,11}
B⁰_C = {3,8}
R¹_C(n) = R⁰_C(n) ∪ ⋃_{b ∈ B⁰_C} R⁰_{(b,USE(b))}(n)
For b = 3 the branch criterion is (3, {n}), which
makes n relevant at statement 3; for b = 8 the
branch criterion is (8, {i,n}), which makes i and
n relevant at statements 5-12, and n at
statements 3-4. The definitions that reach these
relevant variables give
S¹_C = {2,4,6,9,11,12}
B¹_C = {b ∈ G | INFL(b) ∩ S¹_C ≠ ø} = {3,8}
and the iteration stabilizes. The final static slice
is
S_C = S¹_C ∪ B¹_C = {2,3,4,6,8,9,11,12}
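Assembled as a program, the slice {2,3,4,6,8,9,11,12} gives the following residual code. This is a sketch with three small liberties: the declaration of k from statement 7 is re-added so the slice compiles, input is read from a stream rather than cin so the fragment is easy to test, and the value observed at the criterion point 14 is returned.

```cpp
#include <istream>
#include <sstream>
using namespace std;

// Residual program for Example 1 w.r.t. C = (14, product);
// the paper's statement numbers are kept in the comments.
int sliceProduct(istream& in) {
    int n;                          // 1
    in >> n;                        // 2
    int product = 1;                // 6
    if (n > 0) {                    // 3
        int i = 1;                  // 4
        int k;                      // 7 (re-added so the slice compiles)
        while (i <= n) {            // 8
            in >> k;                // 9
            product = product * k;  // 11
            i = i + 1;              // 12
        }
    }
    return product;                 // value of product at 14
}
```

For the input 2 3 4 (n = 2 followed by two values of k) the function returns 12, the same value the full program prints at statement 14.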
Dynamic slicing: Slicing criterion
C = (14, product, n=2)
void main ()
1. {int n;
2. cin>>n;
3. if (n>0)
4. int i=1;
5. int sum=0;
6. int product=1;
7. int k;
8¹. i<=n
9¹. cin>>k;
10¹. sum=sum+k;
11¹. product=product*k;
12¹. i=i+1;
8². i<=n
9². cin>>k;
10². sum=sum+k;
11². product=product*k;
12². i=i+1;
8³. i<=n
13. cout<<sum;
14. cout<<product;
}
DU: {(2,3), (2,8¹), (2,8²), (2,8³), (4,8¹), (4,12¹),
(12¹,8²), (12¹,12²), (12²,8³), (6,11¹), (11¹,11²),
(11²,14), (9¹,11¹), (9²,11²)}
TC: {(3,4), (3,5), (3,6), (3,7), (3,8¹), (8¹,9¹),
(8¹,11¹), (8¹,12¹), (8²,9²), (8²,11²), (8²,12²)}
IR: {(8¹,8²), (8¹,8³), (8²,8¹), (8²,8³), (8³,8¹),
(8³,8²)}
S⁰ = {11²}
A¹ = {11¹, 9², 8²}
S¹ = {11², 11¹, 9², 8²}
A² = {6, 9¹, 8¹, 2, 8³, 12¹}
S² = {11², 11¹, 9², 8², 6, 9¹, 8¹, 2, 8³, 12¹}
A³ = {4, 12¹, 12²}
S³ = {11², 11¹, 9², 8², 6, 9¹, 8¹, 2, 8³, 4, 12¹, 12²}
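This closure can be computed mechanically: treat each DU, TC and IR pair (a, b) as "occurrence a may affect occurrence b" and walk backwards from the occurrence named in the criterion. The sketch below does exactly that, with our own string encoding of occurrence labels (8¹ is written "8.1"). Note that a fully mechanical closure also picks up statement 3 through the pair (3,8¹) listed in TC, so it yields the S³ above together with statement 3.

```cpp
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>
using namespace std;

// Backward closure over dependence pairs taken from an execution
// trace: each pair (a, b) means occurrence a may affect occurrence b.
set<string> dynSlice(const vector<pair<string, string>>& pairs,
                     const string& start) {
    map<string, vector<string>> pred;   // b -> its possible causes
    for (auto& [a, b] : pairs) pred[b].push_back(a);
    set<string> slice{start};
    vector<string> work{start};
    while (!work.empty()) {
        string v = work.back(); work.pop_back();
        for (const string& u : pred[v])
            if (slice.insert(u).second) work.push_back(u);
    }
    return slice;
}
```

Feeding it the DU, TC and IR pairs of Example 1 with start occurrence 11² reproduces the dynamic slice computation step by step.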
Example 2: Let us consider a program which
computes the sum and product of the first n
numbers, using a for loop.
void main()
1. {int n;
2. cin>>n;
3. if (n>0)
4. int sum=0;
5. int product=1;
6. int k;
7. for(int i=1; i<=n; i++)
8. {cin>>k;
9. sum=sum+k;
10. product=product*k;
}
11. cout<<sum;
12. cout<<product;
}
Static slicing: Slicing criterion C is (12, product)
Statement | USE()     | DEF()   | R⁰_C
1         | ø         | ø       | product
2         | ø         | n       | product
3         | n         | ø       | product
4         | ø         | sum     | ø
5         | ø         | product | ø
6         | ø         | ø       | product
7         | i,n       | i       | product
8         | ø         | k       | product
9         | sum,k     | sum     | product,k
10        | product,k | product | product,k
11        | sum       | ø       | product
12        | product   | ø       | product
(R⁰_C(i) is the set of relevant variables
immediately before statement i; the false branch
of the if at statement 3 is taken to lead to
statement 11.)
S⁰_C = {5,8,10}
B⁰_C = {3,7}
For b = 3 the branch criterion is (3, {n}), which
makes n relevant at statement 3; for b = 7 the
branch criterion is (7, {i,n}), which makes i and
n relevant at statements 4-10, and n at statement
3. The definitions that reach these relevant
variables give
S¹_C = {2,5,7,8,10}
B¹_C = {3,7}
and the iteration stabilizes. The final static slice
is
S_C = S¹_C ∪ B¹_C = {2,3,5,7,8,10}
Dynamic slicing: Slicing criterion
C = (12, product, n=2)
void main ()
1. {int n;
2. cin>>n;
3. if (n>0)
4. int sum=0;
5. int product=1;
6. int k;
7¹. i=1; i<=n
8¹. cin>>k;
9¹. sum=sum+k;
10¹. product=product*k;
7². i<=n
8². cin>>k;
9². sum=sum+k;
10². product=product*k;
7³. i<=n
11. cout<<sum;
12. cout<<product;
}
DU: {(2,3), (2,7¹), (2,7²), (2,7³), (4,9¹), (9¹,9²),
(9²,11), (5,10¹), (10¹,10²), (10²,12), (8¹,9¹),
(8¹,10¹), (8²,9²), (8²,10²)}
TC: {(3,4), (3,5), (3,6), (3,7¹), (7¹,8¹), (7¹,9¹),
(7¹,10¹), (7²,8²), (7²,9²), (7²,10²)}
IR: {(7¹,7²), (7¹,7³), (7²,7¹), (7²,7³), (7³,7¹),
(7³,7²)}
S⁰ = {10²}
A¹ = {10¹, 8², 7²}
S¹ = {10², 10¹, 8², 7²}
A² = {5, 8¹, 7¹, 2, 7³, 10¹}
S² = {10², 10¹, 8², 7², 5, 7¹, 8¹, 2, 7³}
A³ = {3}
S³ = {10², 10¹, 8², 7², 5, 7¹, 8¹, 2, 7³, 3}
5. CONCLUSION
The concept of program slicing was originally
defined by Weiser as a method for decomposing
a program into pieces, i.e., slices. The static
slice produced by his method is of the partially
equivalent program type. His solution produces
a static program slice by solving data flow
equations iteratively on the flow graph of the
program. In this paper we have used the same
concept for computing static and dynamic
slices. We have taken a number of examples and
analyzed their slices. As the number of control
statements in a program increases, it becomes
more difficult to compute the control dependent
statements and then the relevant variables. For a
for loop we can either consider separate
statements for its three sections or a single
combined statement. The output is the same; the
only difference is that the computation is easier
with separate statements.
6. ACKNOWLEDGEMENT
I wish to convey my sincere gratitude and
appreciation to each and every person who
helped me in writing this paper. I am grateful to
my institution, Birla Institute of Technology and
my colleagues. I would especially like to thank
Dr. K. S. Patnaik, my guide for his advice and
guidance.
7. REFERENCES
[1] M. Weiser, “Program Slicing,” IEEE
Transactions on Software Engineering, Vol.
SE-10, No. 4, 1984, pp. 352-357.
[2] F. Tip, “A Survey of Program Slicing
Techniques,” Journal of Programming
Languages, Vol. 3, No. 3, 1995, pp. 121-189.
[3] D. Binkley and K. B. Gallagher,
“Program Slicing,” Advances in Computers,
Vol. 43, 1996, pp. 1-50.
[4] B. Korel and J. Laski. Dynamic program
slicing. Information Processing Letters,
29(3):155–163, 1988.
[5] M. Kamkar, “An overview and
comparative classification of program slicing
techniques,” J. Systems Software, vol. 31, pp.
197–214, 1995.