The software execution process is tracked with event logs, which record the execution flow in a textual log file. The log file also records error values and the classes from which they originate, and these error values are used to analyze software failures. Data mining methods are applied to evaluate quality and to analyze the software failure rate. The text logs are parsed, data values are extracted from them, and the extracted values are mined with machine learning methods for failure analysis.
Service errors, service complaints, interaction errors and crash errors are maintained in the log files, together with the events and their reactions. Software terminations and execution failures are identified from the log details. A log file parsing process is applied to extract data from the logs, and association rule mining methods are used to analyze the log files for failure detection. The system uses the Weighted Association Rule Mining (WARM) scheme to estimate the failure rate in the software execution flow and improves the failure rate detection accuracy of the WARM model.
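As an illustration of the weighted association rule idea described above, the following Python sketch mines weighted rules from log-event transactions. The event names, weights and thresholds are hypothetical; the abstract does not give the exact WARM formulation, so this follows a common weighted-support definition.

```python
from itertools import combinations

# Hypothetical log-event transactions, one set of events per execution trace.
transactions = [
    {"service_error", "crash"},
    {"interaction_error", "service_complaint"},
    {"service_error", "interaction_error", "crash"},
    {"service_error", "crash"},
]
# Hypothetical event weights reflecting failure severity.
weights = {"service_error": 0.6, "interaction_error": 0.4,
           "service_complaint": 0.3, "crash": 1.0}

def weighted_support(itemset):
    """Average item weight times the fraction of transactions containing the itemset."""
    count = sum(1 for t in transactions if itemset <= t)
    avg_w = sum(weights[i] for i in itemset) / len(itemset)
    return avg_w * count / len(transactions)

# Enumerate candidate rules A -> B over item pairs with sufficient weighted support.
min_wsup, min_conf = 0.3, 0.7
items = sorted(set().union(*transactions))
for a, b in combinations(items, 2):
    pair = {a, b}
    wsup = weighted_support(pair)
    if wsup < min_wsup:
        continue
    conf = (sum(1 for t in transactions if pair <= t) /
            sum(1 for t in transactions if a in t))
    if conf >= min_conf:
        print(f"{a} -> {b}  weighted_support={wsup:.2f}  confidence={conf:.2f}")
```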
A LOG-BASED TRACE AND REPLAY TOOL INTEGRATING SOFTWARE AND INFRASTRUCTURE (ijseajournal)
We propose a log-based analysis tool for evaluating web application computer systems. A key feature of the tool is the integration of software logs with infrastructure logs. With the tool, software engineers alone can resolve system faults, even when the faults involve both software problems and infrastructure problems. The tool consists of five steps: preparing the software, preparing the infrastructure, collecting logs, replaying the log data, and tracing the log data. The tool was applied to a simple web application system in a small-scale local area network. We confirmed the usefulness of the tool when a software engineer diagnosed system failures such as “404” and “no response” errors. In addition, the tool was partially applied to a real large-scale computer system with many web applications and a large network environment. Using the replay and trace features of the tool, we found the causes of a real authentication error; the causes combined an infrastructure problem with a software problem. Even when a failure is caused by both a software problem and an infrastructure problem, we confirmed that software engineers can distinguish between the two using the tool.
Genetic fuzzy process metric measurement system for an operating system (ijcseit)
The operating system (OS) is the most essential software of a computer system; deprived of it, the computer system is totally useless. It is the frontier for accessing relevant computer resources, and its performance greatly affects the user's overall experience of the system. Related literature has tried different methods and techniques to measure the process metric performance of the operating system, but none has incorporated genetic algorithms and fuzzy logic, which is a novel approach. Extending the work of Michalis, this research focuses on measuring the process metric performance of an operating system using a set of operating system criteria, fusing fuzzy logic to handle imprecision and a genetic algorithm for process optimization.
AN INVESTIGATION OF THE MONITORING ACTIVITY IN SELF ADAPTIVE SYSTEMS (ijseajournal)
Runtime monitoring is essential for detecting violations during the execution of the underlying software system. In this paper, an investigation of the monitoring activity of the MAPE-K control loop is performed, which aims at exploring: (1) the architecture of the monitoring activity in terms of the involved components and the control and data flow between them; (2) the standard interface of the monitoring component with the other MAPE-K components; (3) adaptive monitoring and its importance to the monitoring overhead issue; and (4) the monitoring mode and its relevance to specific situations and systems. The paper also presents a Java framework for the monitoring process in self-adaptive systems.
The adoption of cloud environments for various applications has led to security and privacy concerns about users' data, and protecting user data and privacy on such platforms is an area of concern. Many cryptographic strategies have been presented to provide secure sharing of resources on cloud platforms. These methods try to achieve a secure authentication strategy that realizes features such as self-blindable access tickets, group signatures, anonymous access tickets, minimal disclosure of tickets and revocation, but each one varies in how these features are realized. Each feature requires a different cryptographic mechanism, which induces computational complexity and affects the deployment of these models in practical applications. Most of these techniques are designed for a particular application environment and adopt public key cryptography, which incurs high cost due to its computational complexity.
To address these issues, this work presents secure and efficient privacy-preserving mining of data on a public cloud platform by adopting a party- and key-based authentication strategy. The proposed SCPPDM (Secure Cloud Privacy Preserving Data Mining) is deployed on the Microsoft Azure cloud platform. Experiments are conducted to evaluate computational complexity, and the outcome shows that the proposed model achieves significant performance in terms of computation overhead and cost.
A web application detecting DoS attack using MCA and TAM (eSAT Journals)
Interconnected systems, such as all kinds of servers including web servers, are always under threat from network attackers. There are many popular attacks such as man-in-the-middle attacks, cross-site scripting and spamming, but the denial-of-service attack is considered one of the most dangerous attacks on networked applications and causes many serious issues for these computing systems. A denial-of-service (DoS) attack is an attempt to make a machine or network resource unavailable to its intended users. Because a DoS attack reduces the performance of the server, detecting the attack is necessary to maintain the server's efficiency. Hence, Multivariate Correlation Analysis is used; this approach employs triangle areas to extract correlation information from network traffic. Our implemented system is evaluated using the KDD Cup 99 data set, and the effect of both non-normalized and normalized data on the performance of the proposed detection system is examined. The implemented system is capable of learning new patterns of legitimate network traffic, so it detects both known and unknown types of DoS attacks; in other words, it works on the principle of anomaly-based attack detection. The triangle-area-based technique is used to speed up the process. Because the stored legitimate profiles must be kept secure, a detection mechanism for SQL injection is also implemented in the system. The system designed to carry out attack detection is a question-and-answer portal, i.e. a web application, and hence the system uses the HTTP protocol, unlike previous systems which used TCP. Keywords: Denial-of-Service attack, Feature Normalization, Triangle Area Map (TAM), Multivariate Correlation Analysis (MCA), anomaly-based detection, SQL injection, HTTP, TCP.
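A minimal sketch of the triangle-area idea behind the MCA approach described above: for each pair of traffic features, the area of the triangle formed by the two feature values and the origin is computed, producing a Triangle Area Map per record. The feature names and the simple threshold test against a "normal" profile are hypothetical; the paper's exact normalization and profile construction are not reproduced here.

```python
import numpy as np

def triangle_area_map(record):
    """Lower-triangular matrix of triangle areas for every feature pair.
    For features (f_i, f_j), the triangle has vertices (0,0), (f_i,0), (0,f_j),
    so its area is f_i * f_j / 2."""
    n = len(record)
    tam = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            tam[i, j] = record[i] * record[j] / 2.0
    return tam

# Hypothetical traffic records: [duration, src_bytes, dst_bytes, count]
normal_records = np.array([[1.0, 200, 400, 3],
                           [1.2, 180, 420, 4],
                           [0.9, 220, 390, 3]], dtype=float)
suspect = np.array([0.1, 5000, 10, 200], dtype=float)

# Normal profile: mean TAM over legitimate traffic.
profile = np.mean([triangle_area_map(r) for r in normal_records], axis=0)

# Anomaly score: distance between the suspect's TAM and the normal profile.
score = np.linalg.norm(triangle_area_map(suspect) - profile)
threshold = 3 * np.std([np.linalg.norm(triangle_area_map(r) - profile)
                        for r in normal_records])
print("anomaly" if score > threshold else "normal", f"score={score:.1f}")
```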
An analysis of software aging in cloud environment (IJECE IAES)
Cloud computing is an environment in which several virtual machines (VMs) run concurrently on physical machines. The cloud computing infrastructure hosts multiple cloud services that communicate with each other through interfaces. During operation, the software systems accumulate errors or garbage that lead to system failure and other hazardous consequences; this condition is called software aging. Software aging happens because of memory fragmentation, large-scale resource consumption and the accumulation of numerical errors. It degrades performance and may result in system failure through premature resource exhaustion. The errors that cause software aging are of special types and affect the response time and the environment. The issue must be resolved at run time because of the dynamic nature of the problem. To alleviate the impact of software aging, software rejuvenation techniques are used. The rejuvenation process reboots the system or reinitializes the software, removing accumulated error conditions, freeing up deadlocks and defragmenting operating system resources such as memory. Software aging and rejuvenation have generated a lot of research interest recently. This work reviews some of the research related to the detection of software aging and identifies research gaps.
Proactive cloud service assurance framework for fault remediation in cloud en... (IJECE IAES)
Cloud resiliency is an important issue in the successful implementation of cloud computing systems. Handling cloud faults proactively, with a suitable remediation technique of minimum cost, is an important requirement for a fault management system. The selection of the best applicable remediation technique is a decision-making problem that considers parameters such as i) the impact of the remediation technique, ii) the overhead of the remediation technique, iii) the severity of the fault and iv) the priority of the application. This manuscript proposes an analytical model to measure the effectiveness of a remediation technique for various categories of faults, and further demonstrates the implementation of an efficient fault remediation system using a rule-based expert system. The expert system computes a utility value for each remediation technique in a novel way and selects the best remediation technique from its knowledge base. A prototype was developed for experimentation, and the results show improved availability with less overhead compared to a reactive fault management system.
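To illustrate the utility-based selection described above, the following Python sketch scores candidate remediation techniques from the four listed parameters. The weighting scheme, the candidate techniques and their parameter values are hypothetical; the manuscript's actual utility formula is not given in the abstract.

```python
# Hypothetical remediation candidates with normalized parameters in [0, 1]:
# impact (benefit of applying the technique) and overhead (its cost).
candidates = {
    "vm_restart":     {"impact": 0.9, "overhead": 0.6},
    "service_retry":  {"impact": 0.5, "overhead": 0.1},
    "live_migration": {"impact": 0.8, "overhead": 0.4},
}
fault_severity = 0.7   # hypothetical severity of the current fault
app_priority = 0.9     # hypothetical priority of the affected application

def utility(params, severity, priority):
    """Simple additive utility: reward impact scaled by severity and priority,
    penalize overhead. The weights are illustrative assumptions."""
    return 0.6 * params["impact"] * severity * priority - 0.4 * params["overhead"]

best = max(candidates, key=lambda name: utility(candidates[name],
                                                fault_severity, app_priority))
for name, params in candidates.items():
    print(name, round(utility(params, fault_severity, app_priority), 3))
print("selected:", best)
```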
In a power plant with a Distributed Control System (DCS), process parameters are continuously stored in databases at discrete intervals. The data contained in these databases may not appear to contain valuable relational information, but in practice such relations exist. A large number of process parameter values change with time in a power plant, and these parameters are part of the rules framed by domain experts for the expert system. As the parameters change, there is a high possibility of forming new rules from the dynamics of the process itself. We present an efficient algorithm that generates all significant rules from the real data. Association-based algorithms were compared and the algorithm best suited to this process application was selected. The application of the learning system is studied in a power plant domain, and a SCADA interface was developed to acquire online plant data.
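As a sketch of rule generation from plant data as described above, the following discretizes hypothetical DCS readings into categorical items and mines simple IF-THEN rules between parameter states. The parameters, bin edges and thresholds are assumptions; the paper's actual algorithm and rule base are not reproduced.

```python
from itertools import combinations

# Hypothetical DCS readings sampled at discrete intervals:
# (drum_level_mm, feed_flow_tph, steam_temp_C)
readings = [(120, 310, 545), (118, 305, 540), (200, 250, 520), (119, 312, 548)]

def discretize(level, flow, temp):
    """Turn continuous readings into categorical items; bin edges are illustrative."""
    return {
        "drum_level=" + ("low" if level < 150 else "normal"),
        "feed_flow=" + ("high" if flow > 300 else "normal"),
        "steam_temp=" + ("high" if temp > 542 else "normal"),
    }

snapshots = [discretize(*r) for r in readings]

def support(itemset):
    return sum(1 for s in snapshots if itemset <= s) / len(snapshots)

# Generate significant IF-THEN rules between pairs of parameter states.
items = sorted(set().union(*snapshots))
for a, b in combinations(items, 2):
    if support({a, b}) >= 0.5 and support({a, b}) / support({a}) >= 0.8:
        print(f"IF {a} THEN {b}")
```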
Software CrashLocator: Locating the Faulty Functions by Analyzing the Crash S... (INFOGAIN PUBLICATION)
In recent years, studies have been dedicated mainly to the analysis of real-world crashes in large-scale software systems. In computing, a crash occurs when a computer program, such as a software application, stops functioning properly. Software crashes are a serious problem in production environments. When a crash happens, a crash report containing the stack trace of the software at the time of the crash is sent to the development team. The team may receive hundreds of stack traces from all deployment sites, and many stack traces may be due to the same problem. If a developer analyzes each trace individually, it may take a long time, and redundancy may occur, with two developers fixing the same problem. This motivates us to present a solution that analyzes the stack traces, finds the functions responsible for the crash and ranks them, so that development resources can be optimized. In this paper we propose a solution to this problem by developing Software CrashLocator.
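A minimal sketch of the ranking idea described above: group crash reports by their stack traces and score each function by how often, and how close to the crash point, it appears. The stack traces and the scoring weights are hypothetical; the actual CrashLocator ranking formula is not reproduced here.

```python
from collections import defaultdict

# Hypothetical crash reports: each stack trace lists functions from the
# crash point (index 0) down to the entry point.
stack_traces = [
    ["parse_header", "read_packet", "handle_request", "main"],
    ["parse_header", "read_packet", "handle_request", "main"],
    ["format_reply", "handle_request", "main"],
]

scores = defaultdict(float)
for trace in stack_traces:
    for depth, func in enumerate(trace):
        # Functions nearer the crash point receive a larger share of the blame.
        scores[func] += 1.0 / (depth + 1)

# Rank candidate faulty functions by accumulated score.
for func, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{func}: {score:.2f}")
```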
A Review on Software Fault Detection and Prevention Mechanism in Software Dev... (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
LusRegTes: A Regression Testing Tool for Lustre Programs (IJECE IAES)
Lustre is a synchronous data-flow declarative language widely used for safety-critical applications (avionics, energy, transport...). In such applications, the testing activity for detecting errors of the system plays a crucial role. During the development and maintenance processes, Lustre programs are often evolving, so regression testing should be performed to detect bugs. In this paper, we present a tool for automatic regression testing of Lustre programs. We have defined an approach to generate test cases in regression testing of Lustre programs. In this approach, a Lustre program is represented by an operator network, then the set of paths is identified and the path activation conditions are symbolically computed for each version. Regression test cases are generated by comparing paths between versions. The approach was implemented in a tool, called LusRegTes, in order to automate the test process for Lustre programs.
A Runtime Evaluation Methodology and Framework for Autonomic Systems (IDES Editor)
An autonomic system provides a self-adaptive ability that enables the system to dynamically adjust its behavior in response to environmental changes or system failures. The fundamental process of adaptive behavior in an autonomic system consists of monitoring system and/or environment information, analyzing the monitored information, planning an adaptation policy and executing the selected policy. Evaluating system utility is a significant part of this process. We propose a novel approach for evaluating an autonomic system at runtime. Our proposed method takes advantage of a goal model, which has been widely used in the requirements elicitation phase to capture system requirements. We suggest a state-based goal model that is dynamically activated as the system state changes. In addition, we define types of constraints that can be used to evaluate the goal satisfaction level. We implemented a prototype of an autonomic computing software engine to verify our proposed method. We simulated the behavior of the autonomic computing engine with a home surveillance robot scenario and observed the validity of our proposed method.
Program aging is a degradation of performance or functionality caused by resource depletion. Aging affects cloud services that provide access to large data banks and computing resources. It incurs large budgets and delays in defect removal, which calls for additional solutions such as renewal in the form of controlled restarts. Collections of various runtime metrics are a significant source for further study of the detection and analysis of aging issues. This study highlights a method for detecting aging defects immediately after their introduction by runtime comparison of different development scenarios. The study focuses on program aging and the resulting service crashes.
Bug triage means assigning a new bug to an expert developer. Manual bug triage is expensive in time and poor in accuracy, so there is a need to automate the bug triage process. To automate it, text classification techniques are applied using stopword removal and stemming. In our proposed work we use Naive Bayes classifiers to predict the developers. Data reduction techniques such as instance selection and keyword selection are used to reduce the bug reports and words. This helps the system to predict only those developers who have expertise in solving the assigned bug. We also provide status updates for the bug report, i.e. if the bug is solved, the bug report is updated, and if a particular developer fails to solve the bug, the bug is reassigned to another developer.
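A minimal sketch of the pipeline described above, using scikit-learn's Naive Bayes text classifier with simple stopword removal. The bug summaries and developer names are hypothetical, and stemming is omitted to keep the example dependency-free beyond scikit-learn.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical historical bug reports and the developers who fixed them.
summaries = [
    "null pointer exception in login form validation",
    "memory leak when rendering large report tables",
    "login session expires too early after password reset",
    "report export to pdf crashes on large tables",
]
developers = ["alice", "bob", "alice", "bob"]

# Bag-of-words with English stopword removal, then a Naive Bayes classifier.
triage = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
triage.fit(summaries, developers)

new_bug = ["crash while exporting report to pdf"]
print("suggested developer:", triage.predict(new_bug)[0])
```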
USING CATEGORICAL FEATURES IN MINING BUG TRACKING SYSTEMS TO ASSIGN BUG REPORTS (ijseajournal)
Most bug assignment approaches utilize text classification and information retrieval techniques. These approaches use the textual contents of bug reports to build recommendation models. The textual contents of bug reports are usually high-dimensional and a noisy source of information, so these approaches suffer from low accuracy and high computational needs. In this paper, we investigate whether categorical fields of bug reports, such as the component to which the bug belongs, are appropriate for representing bug reports instead of the textual description. We build a classification model that utilizes the categorical features as a representation of the bug report. The experimental evaluation is conducted using three projects, namely NetBeans, Freedesktop and Firefox. We compared this approach with two machine-learning-based bug assignment approaches. The evaluation shows that using the textual contents of bug reports is important; in addition, it shows that the categorical features can improve the classification accuracy.
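To illustrate the idea above of classifying on categorical bug-report fields rather than text, the following sketch one-hot encodes hypothetical component, severity and operating-system fields and trains a simple classifier. The field values, assignees and the choice of logistic regression are assumptions for illustration, not the paper's model.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical bug reports described only by categorical fields.
reports = [
    {"component": "ui", "severity": "minor", "os": "linux"},
    {"component": "network", "severity": "major", "os": "windows"},
    {"component": "ui", "severity": "major", "os": "linux"},
    {"component": "network", "severity": "minor", "os": "linux"},
]
assignees = ["alice", "bob", "alice", "bob"]

# One-hot encode the categorical fields, then fit a linear classifier.
model = make_pipeline(DictVectorizer(sparse=False),
                      LogisticRegression(max_iter=1000))
model.fit(reports, assignees)

new_report = {"component": "ui", "severity": "minor", "os": "windows"}
print("suggested assignee:", model.predict([new_report])[0])
```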
With the emergence of virtualization and cloud computing technologies, several services are hosted on virtualization platforms. Virtualization is the technology that many cloud service providers rely on for efficient management and coordination of the resource pool. As essential services are also hosted on cloud platforms, it is necessary to ensure continuous availability by implementing all necessary measures. Windows Active Directory is one such service, developed by Microsoft for Windows domain networks. It is included in Windows Server operating systems as a set of processes and services for authentication and authorization of users and computers in a Windows domain network, and it is required to run continuously without downtime. As a result, there is a chance of accumulating errors or garbage, leading to software aging, which in turn may lead to system failure and associated consequences. In this work, the software aging patterns of the Windows Active Directory service are studied. Software aging of Active Directory needs to be predicted accurately so that rejuvenation can be triggered to ensure continuous service delivery. To predict the appropriate time, a model based on a time series forecasting technique is built.
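As a sketch of the forecasting step described above, the following fits a simple linear trend to hypothetical memory-usage samples of a long-running service and estimates when usage would cross a rejuvenation threshold. The metric, threshold and linear-trend assumption are illustrative; the paper's actual forecasting model is not specified here.

```python
import numpy as np

# Hypothetical resident-memory samples (MB) of a long-running service,
# taken once per hour.
hours = np.arange(24)
memory_mb = 500 + 12 * hours + np.random.default_rng(0).normal(0, 5, 24)

# Fit a linear trend as a stand-in for a time series forecasting model.
slope, intercept = np.polyfit(hours, memory_mb, 1)

# Estimate when memory usage will cross the rejuvenation threshold.
threshold_mb = 1200
hours_to_threshold = (threshold_mb - intercept) / slope
print(f"trend: {slope:.1f} MB/hour; "
      f"threshold of {threshold_mb} MB reached near hour {hours_to_threshold:.0f}")
```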
There are many ways to ruin a performance testing project, and just a handful of ways to do it right. This publication analyses the most widespread performance testing blunders. It is impossible to expose all the varieties of testing wrongdoing in one article; as such, this publication is deliberately open-ended.
A NEW DATA ENCODER AND DECODER SCHEME FOR NETWORK ON CHIP (Editor IJMTER)
System-on-chip (SoC) based systems have several disadvantages in power dissipation and clock rate when data are transferred from one system to another on chip. At the same time, a system operating at a higher rate does not support a lower-rate bus network for data transfer. An alternative scheme has been proposed for high-speed data transfer, but it is limited to SoCs. Unlike the SoC, the network-on-chip (NoC) has many advantages for data transfer. It has a special feature for on-chip data transfer called the transitional encoder, whose operation is based on input transitions, and it supports systems operating at higher frequencies. In this project, a low-power encoding scheme is proposed. The proposed system yields lower dynamic power dissipation than the existing system due to the reduction of switching activity and coupling switching activity. Although many factors contribute to power dissipation, only dynamic power dissipation is considered, as it offers a reasonable advantage. The proposed system is synthesized using Quartus II 9.1 software. In addition, the proposed system will be extended to inter-PE communication with the help of routers and PEs performing various operations. To implement this system in real NoCs, the proposed encoders and decoders for data transfer under regular traffic scenarios should be considered.
A RESEARCH - DEVELOP AN EFFICIENT ALGORITHM TO RECOGNIZE, SEPARATE AND COUNT ... (Editor IJMTER)
Coins are an important part of our life. We use coins in places like stores, banks, buses and trains, so it becomes a basic need that coins can be sorted and counted automatically, which in turn requires that coins can be recognized automatically. We present an automated coin recognition system for the Indian coins of Rs. 1, 2, 5 and 10 with rotation invariance. We have taken images from both sides of the coins, so this system is capable of recognizing coins from either side. Features are extracted from the images using techniques such as the Hough Transform and pattern averaging.
Analysis of VoIP Traffic in WiMAX Environment (Editor IJMTER)
Worldwide Interoperability for Microwave Access (WiMAX) is currently one of the hottest technologies in wireless communication. It is a standard based on the IEEE 802.16 wireless technology that provides very high-throughput broadband connections over long distances. In parallel, Voice over Internet Protocol (VoIP) is a technology that provides voice communication over the Internet Protocol; it has become an alternative to the public switched telephone network (PSTN) due to its capability of transmitting voice as packets over IP networks. A lot of research has been done on analyzing the performance of VoIP traffic over WiMAX networks. In this paper we review the analyses carried out by several authors for the most common VoIP codecs, G.711, G.723.1 and G.729, over a WiMAX network using various service classes. The objective is to compare the results for different types of service classes with respect to QoS parameters such as throughput, average delay and average jitter.
A Hybrid Cloud Approach for Secure Authorized De-Duplication (Editor IJMTER)
Cloud backup is used for people's personal storage, reducing the maintenance effort and managing structure and storage space. The challenging process is deduplication, in both local and global backup deduplication. Prior work provides either local storage deduplication or global storage deduplication to improve storage capacity and processing time. In this paper, the proposed system, called ALG-Dedupe (Application-aware Local-Global Source De-duplication), provides an efficient deduplication process with low system load, a shortened backup window and increased power efficiency for the user's personal storage. In the proposed system the data is partitioned into smaller parts called chunks, and any redundancy in the data is removed before it is stored in the storage area.
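A minimal sketch of chunk-level source deduplication as described above: data is split into fixed-size chunks, each chunk is identified by a content hash, and only previously unseen chunks are stored. The fixed 4 KiB chunk size and the in-memory index are simplifying assumptions; ALG-Dedupe's application-aware chunking and local/global index split are not reproduced here.

```python
import hashlib

CHUNK_SIZE = 4096          # illustrative fixed chunk size (bytes)
stored_chunks = {}         # stand-in for the backup store: hash -> chunk data

def backup(data: bytes):
    """Split data into chunks and store only chunks not already present."""
    uploaded = 0
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in stored_chunks:   # deduplication check
            stored_chunks[digest] = chunk
            uploaded += 1
    return uploaded

# Two hypothetical backups sharing most of their content.
first = b"A" * 10000 + b"B" * 10000
second = b"A" * 10000 + b"C" * 10000
print("chunks stored in first backup:", backup(first))
print("chunks stored in second backup:", backup(second))   # only the changed part
```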
Aging protocols that could incapacitate the Internet (Editor IJMTER)
The biggest threat to the Internet is the fact that it was never really designed; instead, it evolved in fits and starts, thanks to various protocols that were cobbled together to fulfill the needs of the moment. For example, the BGP protocol is used by Internet routers to exchange information about changes to the Internet's network topology, yet it is also among the most fundamentally broken, as Internet routing information can be poisoned with bogus routing information. Few of these protocols were designed with security in mind, and those that were sported no more than was needed to keep out a nosy neighbor, not a malicious attacker. The result is a welter of aging protocols susceptible to exploits on an Internet scale. Here are six Internet protocols that could stand to be replaced sooner rather than later or are (mercifully) on the way out.
A Cloud Computing design with Wireless Sensor Networks For Agricultural Appli... (Editor IJMTER)
The emergence of precision agriculture has been promoted by numerous developments in the field of wireless sensor and actor networks (WSANs). These WSANs provide important data for gathering, management of work, development of crops and limitation of crop diseases. The goal of this paper is to introduce cloud computing as a new technique to be used in addition to WSANs to further enhance their application and benefits in the area of agriculture.
A CAR POOLING MODEL WITH CMGV AND CMGNV STOCHASTIC VEHICLE TRAVEL TIMES (Editor IJMTER)
Carpooling (also car-sharing, ride-sharing, lift-sharing) is the sharing of car journeys so that more than one person travels in a car. It helps to resolve a variety of problems that continue to plague urban areas, ranging from energy demands and traffic congestion to environmental pollution. Most existing methods consider stochastic disturbances arising from variations in vehicle travel times for carpooling, but they do not deal with unmet demand under uncertain vehicle demand. To address this, the proposed system formulates the problem with stochastic demand and travel time parameters using a chance-constrained programming (CCP) approach, under mild assumptions on the distribution of the stochastic parameters, and relates it to a robust optimization approach. We construct a stochastic carpooling model that considers the influence of stochastic travel times, formulated as an integer multiple-commodity network flow problem. Since real problem sizes can be large, it can be difficult to find optimal solutions within a reasonable period of time; therefore a solution algorithm based on a tabu search heuristic is developed to solve the model.
Sustainable Construction With Foam Concrete As A Green Building Material (Editor IJMTER)
A green building is an environmentally sustainable building, designed, constructed and operated to minimise total environmental impact. Carbon dioxide (CO2) is the primary greenhouse gas emitted through human activities, and it is claimed that 5% of the world's carbon dioxide emissions are attributed to the cement industry, cement being the vital constituent of concrete. Due to this significant contribution to environmental pollution, there is a need to find an optimal solution that still satisfies civil construction needs. Apart from normal concrete bricks and clay bricks, foam concrete is an innovative technology for sustainable building and civil construction that fulfils the criteria of a green material. This paper concludes that foam concrete can be an effective sustainable material for construction and also focuses on the cost effectiveness of using foam concrete as a building material in place of clay brick or other bricks.
USE OF ICT IN EDUCATION ONLINE COMPUTER BASED TEST (Editor IJMTER)
A good education system is required for the overall prosperity of a nation. Tremendous growth in the education sector has made the administration of educational institutions complex. Many studies reveal that the integration of ICT helps to reduce this complexity and enhance the overall administration of education. This study was undertaken to identify the various functional areas in which ICT is deployed for information administration in educational institutions and to find the current extent of ICT usage in all these functional areas pertaining to information administration. The various factors that contribute to these functional areas were identified, and a theoretical model was derived and validated.
Textual Data Partitioning with Relationship and Discriminative Analysis (Editor IJMTER)
Data partitioning methods are used to partition data values by similarity, with similarity measures used to estimate transaction relationships. The hierarchical clustering model produces tree-structured results, while partitional clustering produces results in a grid format. Text documents are unstructured data values with high-dimensional attributes. Document clustering groups unlabeled text documents into meaningful clusters. Traditional clustering methods require the cluster count (K) for the document grouping process, and clustering accuracy degrades drastically with an unsuitable cluster count.
Textual data elements are divided into two types: discriminative words and non-discriminative words. Only discriminative words are useful for grouping documents; the involvement of non-discriminative words confuses the clustering process and leads to a poor clustering solution. A variational inference algorithm is used to infer the document collection structure and the partition of document words at the same time. The Dirichlet Process Mixture (DPM) model is used to partition documents; it exploits both the data likelihood and the clustering property of the Dirichlet Process (DP). The Dirichlet Process Mixture Model for Feature Partition (DPMFP) is used to discover the latent cluster structure based on the DPM model, and DPMFP clustering is performed without requiring the number of clusters as input.
Document labels are used to guide the discriminative word identification process. Concept relationships are analyzed with ontology support, and a semantic weight model is used for document similarity analysis. The system improves scalability with the support of labels and concept relations for the dimensionality reduction process.
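As a rough illustration of clustering documents without fixing the cluster count, the sketch below uses scikit-learn's Dirichlet-process-style BayesianGaussianMixture on TF-IDF vectors reduced with truncated SVD. This is a stand-in for the DPMFP model described above, not its implementation; the documents, the component cap and the preprocessing are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical unlabeled documents from two rough topics.
docs = [
    "database index query optimization storage",
    "query planner storage engine index",
    "neural network training gradient descent",
    "deep neural network layers training",
    "database transaction log storage engine",
    "gradient descent converges during training",
]

# TF-IDF vectors reduced to a low-dimensional dense space.
tfidf = TfidfVectorizer().fit_transform(docs)
reduced = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Dirichlet-process-style mixture: the component count is only an upper bound;
# unused components receive negligible weight, so K need not be fixed exactly.
dpm = BayesianGaussianMixture(
    n_components=3, weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(reduced)
print("cluster assignments:", dpm.predict(reduced))
```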
Testing of Matrices Multiplication Methods on Different Processors (Editor IJMTER)
There are many algorithms for matrix multiplication. The standard algorithm has complexity O(n^3), although further research has found that this complexity can be reduced. This paper focuses on matrix multiplication methods and their complexity.
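To make the complexity claim above concrete, the following sketch compares the O(n^3) triple-loop multiplication against NumPy's optimized routine on hypothetical random matrices; the matrix size and timing harness are illustrative only.

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Textbook O(n^3) matrix multiplication with three nested loops."""
    n, m, p = a.shape[0], a.shape[1], b.shape[1]
    c = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                c[i, j] += a[i, k] * b[k, j]
    return c

n = 100  # large enough to show a gap, small enough to run quickly
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c_naive = naive_matmul(a, b)
naive_t = time.perf_counter() - start

start = time.perf_counter()
c_fast = a @ b
fast_t = time.perf_counter() - start

assert np.allclose(c_naive, c_fast)
print(f"naive: {naive_t:.3f}s, numpy: {fast_t:.5f}s")
```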
Malware is a worldwide pandemic. It is designed to damage computer systems without the knowledge of the owner using the system. Software from reputable vendors can also contain malicious code that affects the system or leaks information to remote servers. Malware includes computer viruses, spyware, dishonest adware, rootkits, Trojans, dialers and so on. Malware detectors are the primary tools in the defense against malware, and the quality of such a detector is determined by the techniques it uses. It is therefore imperative that we study malware detection techniques and understand their strengths and limitations. This survey examines different types of malware and malware detection methods.
SURVEY OF TRUST BASED BLUETOOTH AUTHENTICATION FOR MOBILE DEVICE (Editor IJMTER)
The practical requirement of securely establishing identities between two handheld devices is an important concern. An adversary can mount a man-in-the-middle (MITM) attack to intrude on the protocol. Protocols that employ secret keys require the devices to share private information in advance, which is not feasible in this scenario. Apart from insecurely typing passwords into handheld devices or comparing long hexadecimal keys displayed on the devices' screens, many other human-verifiable protocols have been proposed in the literature to solve the problem. Unfortunately, most of these schemes do not scale to more users: even when only three entities attempt to agree on a session key, these protocols need to be rerun three times. In the existing method, bipartite and tripartite authentication protocols are presented using a temporary confidential channel, and the system is further extended into a transitive authentication protocol that allows multiple handheld devices to establish a conference key securely and efficiently. However, this method detects only outsider attacks and does not consider insider attacks. Therefore, in the proposed method a trust-score-based scheme is introduced, which computes trust values for the nodes and provides security. The computed trust score has a positive influence on the confidence with which an entity conducts transactions with a node. The behavior of each node in the network is monitored periodically and its trust value is updated, so that, depending on the node's behavior in the network, a trust relation is established between two nodes.
Glaucoma is a chronic eye disease that can damage the optic nerve. According to the WHO, it is the second leading cause of blindness and is predicted to affect around 80 million people by 2020. Progression of the disease leads to loss of vision, which occurs gradually over a long period of time. Because symptoms only occur when the disease is quite advanced, glaucoma is called the silent thief of sight. Glaucoma cannot be cured, but its development can be slowed down by treatment; therefore, detecting glaucoma in time is critical. However, many glaucoma patients are unaware of the disease until it has reached an advanced stage. In this paper, manual and automatic methods to detect glaucoma are discussed. Manual analysis of the eye is time consuming, and the accuracy of the parameter measurements varies between clinicians. To overcome these problems, the objective of this survey is to introduce a method to automatically analyze ultrasound images of the eye. Automatic analysis of this disease is much more effective than manual analysis.
Survey: Multipath routing for Wireless Sensor Network (Editor IJMTER)
Reliability plays a vital role in some applications of Wireless Sensor Networks, and multipath routing is one way to increase the probability of reliable delivery; moreover, energy consumption is a constraint. In this paper, we provide a survey of the state of the art of proposed multipath routing algorithms for Wireless Sensor Networks. We study the designs, analyze the trade-offs of each design and give an overview of several existing algorithms.
Step up DC-DC Impedance source network based PMDC Motor Drive (Editor IJMTER)
This paper is devoted to a quasi-Z-source network based DC drive. The cascaded (two-stage) quasi-Z-source network can be derived by adding one diode, one inductor and two capacitors to the traditional quasi-Z-source inverter. The proposed cascaded qZSI inherits all the advantages of the traditional solution (voltage boost and buck functions in a single stage, continuous input current and improved reliability). Moreover, compared to the conventional qZSI, the proposed solution reduces the shoot-through duty cycle by over 30% at the same voltage boost factor. Theoretical analysis of the two-stage qZSI in the shoot-through and non-shoot-through operating modes is described, and the proposed and traditional qZSI networks are compared. A prototype of a quasi-Z-source network based DC drive was built to verify the theoretical assumptions. The experimental results are presented and analyzed.
SPIRITUAL PERSPECTIVE OF AUROBINDO GHOSH’S PHILOSOPHY IN TODAY’S EDUCATION (Editor IJMTER)
The paper reflects on the spiritual philosophy of Aurobindo Ghosh, which is helpful in today's education. In the 19th century he wrote about spirituality, and in accordance with that it remains a core and vital part of today's education, very much essential for today's children. Here I present an overview of that philosophy. The regeneration of those values in today's generation is a great challenge for the education system. Developing values and spiritual education in the young is my main motivation. In a materialistic world, redefining values among them is a hard task, but not an impossible one.
Software Quality Analysis Using Mutation Testing Scheme (Editor IJMTER)
Software test coverage is used to measure safety assurance, and the safety-critical analysis is carried out for source code written in the Java language. Testing provides a primary means for assuring software in safety-critical systems. To demonstrate, particularly to a certification authority, that sufficient testing has been performed, it is necessary to achieve the test coverage levels recommended or mandated by safety standards and industry guidelines. Mutation testing provides an alternative or complementary method of measuring test sufficiency, but it has not been widely adopted in the safety-critical industry. The system provides an empirical evaluation of the application of mutation testing to airborne software systems that have already satisfied the coverage requirements for certification.
The system applies mutation testing to safety-critical software developed using high-integrity subsets of C and Ada, identifies the most effective mutant types and analyzes the root causes of failures in test cases. Mutation testing can be effective where traditional structural coverage analysis and manual peer review have failed. The results also show that several testing issues have origins beyond the test activity, which suggests improvements to the requirements definition and coding process. The system also examines the relationship between program characteristics and mutant survival and considers how program size can provide a means for targeting the test areas most likely to contain dormant faults. Industry feedback is also provided, particularly on how mutation testing can be integrated into a typical verification life cycle of airborne software. The system also covers the safety and criticality levels of Java source code.
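A minimal sketch of the mutation testing idea discussed above: a function is mutated by swapping an operator, and the test suite is judged by whether it "kills" the mutant. The function, the single mutation operator and the tests are illustrative assumptions; real mutation tools generate many mutant types automatically.

```python
def is_speed_safe(speed, limit):
    """Original unit under test: speed must stay strictly below the limit."""
    return speed < limit

def mutant_is_speed_safe(speed, limit):
    """Mutant produced by the relational-operator mutation '<' -> '<='."""
    return speed <= limit

def test_suite(func):
    """A tiny test suite; returns True if every assertion passes."""
    cases = [((10, 20), True), ((25, 20), False), ((20, 20), False)]
    return all(func(*args) == expected for args, expected in cases)

# The mutant is "killed" if at least one test fails on it while the original
# passes; surviving mutants indicate insufficient tests.
assert test_suite(is_speed_safe)
print("mutant killed" if not test_suite(mutant_is_speed_safe) else "mutant survived")
```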
Software Defect Prediction Using Local and Global Analysis (Editor IJMTER)
Software defect factors are used to measure the quality of software, and software effort estimation is used to measure the effort required for the software development process. The defect factor has an impact on the software development effort, and software development and cost factors are also decided with reference to the defect and effort factors. Software defects are predicted with reference to module information, and module link information is used in the effort estimation process.
Data mining techniques are used in the software analysis process. Clustering techniques are used in the property grouping process, and rule mining methods are used to learn rules from the clustered data values. The “WHERE” clustering scheme and the “WHICH” rule mining scheme are used in the defect prediction and effort estimation process, and the system uses the module information for both tasks.
The proposed system is designed to improve the defect prediction and effort estimation process. The Single Objective Genetic Algorithm (SOGA) is used in the clustering process, and the rule learning operations are carried out using the Apriori algorithm. The system improves cluster accuracy levels, and the defect prediction and effort estimation accuracy is also improved. The system is developed using the Java language and the Oracle relational database environment.
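To illustrate the cluster-then-learn-rules pipeline described above, the sketch below clusters hypothetical module metrics with k-means and then derives a simple descriptive rule per cluster from its center. The metrics, cluster count and rule form are assumptions; the paper's WHERE/WHICH schemes and SOGA-based clustering are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical module metrics: [lines_of_code, cyclomatic_complexity, past_defects]
modules = np.array([
    [120,  4, 0], [150,  5, 1], [900, 25, 9],
    [850, 22, 7], [200,  6, 1], [950, 30, 11],
], dtype=float)
feature_names = ["lines_of_code", "complexity", "past_defects"]

# Step 1: group modules with similar properties.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(modules)

# Step 2: derive a descriptive rule per cluster from its center, flagging the
# cluster with the higher mean defect count as defect-prone.
defect_prone = int(np.argmax(kmeans.cluster_centers_[:, 2]))
for cluster_id, center in enumerate(kmeans.cluster_centers_):
    conditions = " AND ".join(f"{name} ~ {value:.0f}"
                              for name, value in zip(feature_names, center))
    label = "defect-prone" if cluster_id == defect_prone else "low-risk"
    print(f"IF {conditions} THEN {label}")
```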
Software Cost Estimation Using Clustering and Ranking Scheme (Editor IJMTER)
Software cost estimation is an important task in the software design and development process.
Planning and budgeting tasks are carried out with reference to the software cost values. A variety of
software properties are used in the cost estimation process. Hardware, products, technology and
methodology factors are used in the cost estimation process. The software cost estimation quality is
measured with reference to the accuracy levels.
Software cost estimation is carried out using three types of techniques: the regression based
model, the analogy based model and the machine learning model. Each model has a set of techniques for the
software cost estimation process. Eleven cost estimation techniques under three different categories are
used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product
property values. The ARFF file is used as the main input for the system.
The proposed system is designed to perform the clustering and ranking of software cost
estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation
mechanism. The system improves the clustering and ranking process accuracy. The system produces
efficient ranking results on software cost estimation methods.
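For illustration, an ARFF input for such a system might look like the following minimal sketch; the relation name, attribute names and data rows are hypothetical placeholders, not values taken from the paper:

    @relation cost_estimation
    @attribute team_size numeric
    @attribute methodology {agile, waterfall}
    @attribute size_kloc numeric
    @attribute effort_person_months numeric
    @data
    12, agile, 45.0, 60
    30, waterfall, 120.5, 310

Each @attribute line declares one software product property, and each row under @data describes one completed project used to train or rank the estimation techniques.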
Association Rule Mining Scheme for Software Failure Analysis
International Journal of Modern Trends in Engineering and Research (IJMTER)
www.ijmter.com
e-ISSN: 2349-9745, p-ISSN: 2393-8161
Ms. R. Gowri¹, Dr. M. Latha², Mr. R. Subramanian³
¹Research Scholar, Sri Sarada College for Women, Salem, Tamilnadu, India
²M.Sc., M.Phil., Ph.D., Associate Professor, Department of Computer Science, Sri Sarada College for Women, Salem, Tamilnadu, India
³Erode Arts and Science College, Erode
Abstract-The software execution process is tracked with event logs. The event logs are used to maintain the
execution process flow in a textual log file. The log file also manages the error values and their source of classes.
The error values are used to analyze the failure of the software. The data mining methods are used to evaluate the
quality and software failure rate analysis process. The text logs are processed and data values are extracted from
the data values. The data values are mined using the machine learning methods for failure analysis.
The service error, service complaints, interaction error and crash errors are maintained under the log files.
The events and their reactions are also maintained under the log files. Software termination and execution failures
are identified using the log details. The log file parsing process is applied to extract data from the logs. The
associations rule mining methods are used to analyze the log files for failure detection process. The system uses
the Weighted Association Rule Mining (WARM) scheme to fetch failure rate in the software execution flow. The
system improves the failure rate detection accuracy in WARM model.
I. INTRODUCTION
Software fault tolerance is the ability of computer software to continue its normal operation despite the
presence of system or hardware faults. The only constant is change, and this is certainly more true of software
systems than of almost any other phenomenon. Not all software changes in the same way, so software fault tolerance
methods are designed to overcome execution errors by modifying variable values to create an acceptable program
state. The need to control software faults is one of the most pressing challenges facing software industries today. It is
obvious that fault tolerance must be a key consideration in the early stages of software development.
Different researchers have approached the development of good-quality software fault tolerance from
different angles, but a few methods are the most widely used. N-version programming is based
on hardware fault tolerance techniques that compare the outputs of duplicate hardware modules run in parallel, in
an attempt to minimize coincidental failures caused by common logical flaws. Recovery blocks are a software
adaptation of hardware fault tolerance techniques in which a set of redundant blocks is created, each
of which produces a comparable result. Robust data structure methods are quite suitable compared to N-version
programming and recovery blocks because they allow errors in user-defined structures, such as lists and trees, to be
detected and corrected. Periodically, a particular detection algorithm examines a structure for errors.
II. RELATED WORK
Logs are human-readable text files that report sequences of text entries ranging from regular to error
events. A typical log entry contains a time stamp, the identifier of the source of the event, a severity and a free
text message. Well-known logging frameworks are UNIX Syslog and Microsoft Event Logging.
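As a minimal sketch of such an entry (the layout below is an assumed Syslog-like format chosen for illustration, not a format defined in this paper), the four fields can be extracted as follows:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LogEntryParser {
        // Assumed entry layout: "<date> <time> <source> <severity> <free-text message>"
        private static final Pattern ENTRY = Pattern.compile("^(\\S+ \\S+) (\\S+) (\\S+) (.*)$");

        public static void main(String[] args) {
            String line = "2015-01-12 10:42:17 httpd ERROR worker thread terminated unexpectedly";
            Matcher m = ENTRY.matcher(line);
            if (m.matches()) {
                System.out.println("timestamp = " + m.group(1)); // 2015-01-12 10:42:17
                System.out.println("source    = " + m.group(2)); // httpd
                System.out.println("severity  = " + m.group(3)); // ERROR
                System.out.println("message   = " + m.group(4)); // worker thread terminated unexpectedly
            }
        }
    }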
2.1. Log-Based Failure Analysis
Over the years, several software packages have emerged to support log-based failure analysis, integrating
state-of-the-art techniques to collect, manipulate, and model the log data, e.g., MEADEP and Analyze NOW. Log-
based analysis is not yet supported by fully-automated procedures, leaving most of the processing burden to log
analysts, who often have a limited knowledge of the system. For instance, the authors define a complex algorithm
to identify OS reboots from the log based on the sequential parsing of log messages. Furthermore, since a fault
activation can cause multiple notifications in the log, a significant effort has to be spent to coalesce the entries
related to the same fault manifestation [10]. Preprocessing tasks are critical to obtaining accurate failure analysis.
We aim to obtain accurate logs which can be analyzed without needing to perform laborious preprocessing
activities.
Despite efforts on log processing and analysis, logs are known to suffer from incompleteness and
inaccuracy issues. For example, the analysis of a set of networked Windows NT workstations highlighted that 58
percent of reboots remained unclassified because a clear cause of the reboot was not reported in the logs. The
authors show that the built-in detection mechanisms of the JVM are not capable of providing any evidence for
around 45 percent of failures. Even more important, current logs are often ineffective in the case of software
faults, recognized to be among the main causes of system failures. For instance, in the context of web applications
[3] it has been observed that although logs can detect failures caused by resource exhaustion and environment
conditions, they provide limited coverage of software failures caused by concurrency faults, e.g., deadlocks. In
our earlier study [2], we demonstrated that around 60 percent of failures caused by the activation of software
faults go undetected by current logging mechanisms.
2.2. Log Production Schemes
Several solutions have been proposed to address the inefficiencies of logs. For example, IBM Common
Event Infrastructure [6] provides APIs and infrastructure for the creation, persistence, and distribution of log
events. Apache log4x, e.g., [9], is a configurable environment to collect log events. These frameworks are
valuable because they allow saving the time needed to collect, parse, and filter logs; they mainly address log
format issues rather than addressing the problem of designing the logging mechanism.
A more systematic approach to log management is provided by software systems relying on the Aspect-
Oriented Programming (AOP) paradigm, where logging can be treated as a system-wide feature orthogonal to
other services or to the business logic. For instance, through aspect weaving, a log entry can be systematically
produced for each runtime exception, supporting system-wide and application transparent exception reporting. It
is argued that the correct aspect-oriented exception management and logging can lead to more reliable software.
Aspect-oriented logging requires the adoption of an AOP framework, which may not be the case for several
software industries. We aim to conceive a set of rules for log-based failure analysis that can be applied
independently from a particular programming framework.
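For concreteness, the application-transparent exception logging described above can be sketched with AspectJ-style annotations as follows; the package name com.example.app and the logger name are placeholders, and using this sketch would require an AOP framework such as AspectJ to weave the aspect:

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.AfterThrowing;
    import org.aspectj.lang.annotation.Aspect;
    import java.util.logging.Logger;

    @Aspect
    public class ExceptionLoggingAspect {
        private static final Logger LOG = Logger.getLogger("failure-log");

        // Produce a log entry for every exception thrown by any method in the application package,
        // without touching the business logic itself.
        @AfterThrowing(pointcut = "execution(* com.example.app..*(..))", throwing = "ex")
        public void logException(JoinPoint jp, Throwable ex) {
            LOG.severe("EXCEPTION in " + jp.getSignature() + ": " + ex);
        }
    }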
Other solutions introduce a set of recommendations to improve the expressiveness of the logs. Similarly,
Yuan et al. [7] propose to enhance the logging code by adding information, e.g., data values, to ease the diagnosis
task in case of failures. These works improve the invocations of logging functions that already exist in the
software platform. Nevertheless, incompleteness of the logs cannot be solved acting solely on the existing
functions: Developers might forget to log significant events, and, in many cases, errors escape existing logging
points. Finally, an approach to visualize console logs is proposed in [8]: The authors describe how to obtain a
graph that can be used to improve the logging mechanisms, e.g., by adding missing statements. Differently from
our objective, the improvement of the logging mechanism aims to enhance the semantics of the log rather than its
ability to detect errors.
2.3. Methods for Failure Detection
For the above reasons, other techniques are adopted to detect and analyze failures, such as runtime failure
detection, executable assertions, or software tracing. Runtime failure detection consists of observing, either
locally or remotely, the execution state of the system. It can be implemented either by means of query-based
techniques, e.g., sending heartbeat messages, or by exploiting hardware support where available, such as
watchdog timers. Other solutions rely on continuous monitoring of the status of system variables and on the
comparison of these data with traces of normal and anomalous executions. Executable assertions, usually adopted
in the embedded systems domain, are check statements performed on the program variables to detect application-
specific content errors, e.g., invalid variable values for the given function. Software tracing solutions are widely
adopted to monitor the execution of a software system, e.g., to perform Function Boundary Tracing (FBT), which
registers function entry and exit events. They rely on software instrumentation packages, such as DTrace or
Kerninst, usually working at the binary level, and hence with no or limited knowledge on the structure of the
system. Although FBT is typically adopted for performance evaluation or reverse engineering, it can also be used
to detect system malfunctions.
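As an example of the executable-assertion technique mentioned above, a check on a program variable can be written as an explicit range test that produces a log entry when violated; this is a minimal sketch, and the variable name and bounds are hypothetical:

    public class AssertionExample {
        private static final java.util.logging.Logger LOG = java.util.logging.Logger.getLogger("assertions");

        // Executable assertion: the commanded valve position must stay within [0, 100] percent.
        static void checkValvePosition(double percent) {
            if (percent < 0.0 || percent > 100.0) {
                LOG.severe("ASSERTION FAILED: valve position out of range: " + percent);
            }
        }

        public static void main(String[] args) {
            checkValvePosition(42.0);   // within range, nothing is logged
            checkValvePosition(180.0);  // application-specific content error, logged
        }
    }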
These diverse techniques are extremely valuable to detect given classes of failures; their aim is not
formalizing the implementation of a comprehensive logging mechanism. The novel aspect of our proposal is to
leverage design artifacts to infer a logging mechanism that supports the analysis of software failures. Design
artifacts allow identifying errors that potentially lead to failures. A set of rules establishes how the logging
mechanism must be implemented to detect such errors, i.e., in terms of instructions to be placed in the source code
of software. The proposed rules partially leverage the benefits of the aforementioned detection techniques but, more
importantly, formalize the use of such techniques based on an error model that makes it possible to achieve effective logs: To
the best of our knowledge, this issue has not been addressed so far.
III. SOFTWARE FAILURES ANALYSIS USING EVENT LOGS
Failure analysis techniques aim to characterize the dependability behavior of operational systems and are
used in many industrial sectors, such as aerospace [1] and automotive. These techniques are valuable to engineers:
For example, they allow us to understand system failure modes, establish the cause of failures, prevent their
occurrence, and improve the dependability of future system releases. Failure analysis is often conducted by
collecting event logs, i.e., system-generated files reporting events of interest that occurred during operations. Logs
are used for a variety of purposes, such as debugging, configuration handling, access control, monitoring, and
profiling. Moreover, logs are often the only means to analyze the failure behavior of the system under real
workload conditions.
The level of trust in log-based failure analysis depends on the accuracy of the event log. This is clarified
by Fig. 3.1, which highlights the relationship between the fault-error failure chain and the event log. The errors
ellipse in Fig. 3.1 contains all the activated faults. The subset of errors that reach the system interface is contained
by the failures ellipse. Event logs report a subset of errors. Logs might report errors that do not lead to failures and
only a fraction of actual failures with many failures remaining unreported. These issues represent a serious threat
when logs are used to analyze system failures. Unreported failures may lead to erroneous insights into the
behavior of the system. Similarly, reported errors, although useful when logs are used for other purposes, may be
misinterpreted as false failure indications if not supported by a detailed knowledge of the system. The focus of
this system is on the use of event logs to support accurate failure analysis: To this objective, the event log ellipse
and the failures ellipse should perfectly overlap.
Despite a number of works proposing log filtering and coalescence algorithms [4], [10], the real problem
with failure analysis is the limited ability of current logs to report the right set of error events, i.e., the ones
leading to failures. This is especially true in the case of software faults, which are among the main causes of
system failures. Recent studies have recognized the difficulty in analyzing software failures by looking solely at
logs and in our earlier study [2] we demonstrated that around 60 percent of failures due to software faults do not
leave any trace in logs. The ambition of this work is to fill the gap in the knowledge about the ability of logs to
report software failures and to propose a novel logging approach, named rule-based logging, achieving effective
failure detection. Although several works have addressed the format and representativeness issues of logs, to the
best of our knowledge this is the first contribution addressing the suitability of logs for the analysis of software
failures.
Fig. 3.1. Relationship Between The Fault-Error-Failure Chain And The Event Log
The system provides insights into the deficiencies of current logging mechanisms by analyzing eight
successful open-source and industrial projects, accounting for a total of around 3.5 million lines of code (LOC).
The analysis revealed that traditional logging mechanisms rely on a simplistic coding pattern and lack a
systematic error model. The proposal leverages system design artifacts to define a model encompassing errors
leading to failures. A set of rules establishes how the logging mechanism must be implemented to detect such
errors. The rule-based logging approach is able to detect failures that escape current logging mechanisms and, if
available, to complement the information provided by traditional logs.
The rule-based logging has been applied to two software systems: Apache web server and TAO Open
Data Distribution Service (DDS). The improvement of the detection capability of the log is shown by means of
around 12,500 software fault injection experiments. The most widely adopted logging pattern aims to detect errors via
checking code placed at the end of a block of instructions. Rule-based logging significantly improves the failure
detection capability of the log: On average, around 92 percent of failures due to software faults are logged by the
proposed mechanism. Furthermore, it produces few reported errors. Rule-based logging achieves a high
compression rate: On average, the rule-based log is around 150 times smaller than the traditional one. Failures are
notified with few lines, thus easing the interpretation of the log.
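The end-of-block checking pattern mentioned above can be sketched as follows; the operation, file paths and status codes are hypothetical, and the point is that only errors surfacing in the final status value ever reach the log:

    public class TraditionalLoggingPattern {
        private static final java.util.logging.Logger LOG = java.util.logging.Logger.getLogger("app");

        public static void main(String[] args) {
            // Typical pattern: run a block of work, then check a status value at the end
            // and log only on failure. Errors that corrupt intermediate state without
            // affecting the final status leave no trace in the log.
            int status = copyConfiguration("/etc/app.conf", "/tmp/app.conf");
            if (status != 0) {
                LOG.severe("configuration copy failed with status " + status);
            }
        }

        static int copyConfiguration(String from, String to) {
            return 0; // stub standing in for the real operation
        }
    }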
IV. PROBLEM STATEMENT
Several software packages have emerged to support log-based failure analysis, integrating state-of-the-art
techniques to collect, manipulate, and model the log data, e.g., MEADEP, Analyze NOW and SEC. Log-based
analysis is not yet supported by fully-automated procedures, leaving most of the processing burden to log
analysts, who often have a limited knowledge of the system. For instance, the authors define a complex algorithm
to identify OS reboots from the log based on the sequential parsing of log messages. Furthermore, since fault
activation can cause multiple notifications in the log, a significant effort has to be spent to coalesce the entries
related to the same fault manifestation. Preprocessing tasks are critical to obtaining accurate failure analysis. The
system aims to obtain accurate logs which can be analyzed without needing to perform laborious
preprocessing activities.
Despite efforts on log processing and analysis, logs are known to suffer from incompleteness and
inaccuracy issues. For example, the analysis of a set of networked Windows NT workstations highlighted that 58
percent of reboots remained unclassified because a clear cause of the reboot was not reported in the logs. The
authors show that the built-in detection mechanisms of the JVM are not capable of providing any evidence for
around 45 percent of failures. Even more important, current logs are often ineffective in the case of software
faults, recognized to be among the main causes of system failures. For instance, in the context of web applications
it has been observed that although logs can detect failures caused by resource exhaustion and environment
conditions, they provide limited coverage of software failures caused by concurrency faults. 60 percent of failures
caused by the activation of software faults go undetected by current logging mechanisms. The Association Rule
Mining (ARM) produces failure information with low accuracy levels. The following problems are identified
from the current software failure analysis schemes.
• High scalability is not supported.
• Failure detection accuracy is low.
• Event summarization is not optimized.
V. FAILURE DISCOVERY WITH EVENT WEIGHTS
The software failure analysis system uses the event logs generated during the software run time. Event
logs have been widely used over the last three decades to analyze the failure behavior of a variety of systems.
Nevertheless, the implementation of the logging mechanism lacks a systematic approach and collected logs are
often inaccurate at reporting software failures: This is a threat to the validity of log-based failure analysis. The
system analyzes the limitations of current logging mechanisms and proposes a rule-based approach to make logs
effective to analyze software failures. The approach leverages artifacts produced at system design time and puts
forth a set of rules to formalize the placement of the logging instructions within the source code. The Association
Rule Mining (ARM) method is used to estimate the log patterns with failure details. The Weighted Association
Rule Mining (WARM) scheme is used to improve the log analysis for failure detection process.
5.1. Association Rule Mining Process
A number of data mining algorithms have been recently developed that greatly facilitate the processing
and interpreting of large stores of data. One example is the association rule-mining algorithm, which discovers
correlations between items in transactional databases. The Apriori algorithm is an example of an association rule
mining algorithm. Using this algorithm, candidate patterns that receive sufficient support from the database are
considered for transformation into a rule. This type of algorithm works well for complete data with discrete
values.
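The two standard measures behind this selection, support and confidence, can be illustrated with a minimal Java sketch; the transactions below are hypothetical sets of error-event identifiers observed in log windows, not data from the paper:

    import java.util.*;

    public class SupportConfidenceDemo {
        // Each transaction: the set of error events observed together in one log window (made-up data).
        static List<Set<String>> transactions = Arrays.asList(
                new HashSet<>(Arrays.asList("SERVICE_ERROR", "CRASH")),
                new HashSet<>(Arrays.asList("SERVICE_ERROR", "INTERACTION_ERROR", "CRASH")),
                new HashSet<>(Arrays.asList("SERVICE_ERROR")),
                new HashSet<>(Arrays.asList("INTERACTION_ERROR")));

        // Support: fraction of transactions containing every item in the itemset.
        static double support(Set<String> items) {
            long hits = transactions.stream().filter(t -> t.containsAll(items)).count();
            return (double) hits / transactions.size();
        }

        public static void main(String[] args) {
            Set<String> antecedent = Collections.singleton("SERVICE_ERROR");
            Set<String> both = new HashSet<>(antecedent);
            both.add("CRASH");
            double supXY = support(both);       // support of {SERVICE_ERROR, CRASH}
            double supX = support(antecedent);  // support of {SERVICE_ERROR}
            double confidence = supXY / supX;   // confidence of the rule SERVICE_ERROR -> CRASH
            System.out.printf("support=%.2f confidence=%.2f%n", supXY, confidence);
        }
    }

A candidate rule is kept only if both values exceed the chosen minimum support and minimum confidence thresholds.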
5.2. Rule-Based Logging
Current logging practices often postpone the insertion of the logging code to the last stages of the
development cycle, according to inefficient patterns and with no knowledge about the system structure.
Differently from current strategies, the proposal leverages system design artifacts to define a systematic error
model supporting the log-based failure analysis of a given system. The implementation of the logging mechanism
is formalized in terms of rules that allow detecting errors based on the model. The approach at a glance is denoted
as rule-based logging. The key idea is to leverage high- and/or low-level artifacts available at design time to infer
a system representation model. The model identifies a set of entities in the system and interactions among them. A
set of logging rules (LR), inspired by an error model, drives the unambiguous insertion of the logging instructions
within the code of identified entities. The logging rules are conceived to ensure that the data needed to perform
the failure analysis are provided by the event log.
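As one plausible illustration of such a logging rule (a sketch only; the event markers, entity and method names below are placeholders, not the rules defined in this work), an entity can log a start event on entry to each provided service and an end event only on normal completion, so that a missing end event in the log directly signals an abnormal termination:

    public class PaymentService { // hypothetical entity identified in the design model
        private static final java.util.logging.Logger LOG = java.util.logging.Logger.getLogger("rule-based-log");

        public void processOrder(String orderId) {
            LOG.info("SERVICE_START PaymentService.processOrder " + orderId); // rule: log service start on entry
            // ... business logic of the service ...
            LOG.info("SERVICE_END PaymentService.processOrder " + orderId);   // rule: log service end on normal exit
        }
    }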
5.3. Weighted Association Rule Mining (WARM) for Log Analysis
The weighted association rule mining (WARM) model is used to estimate the failure relationship with log
event details. The association rule mining model uses the support and confidence for the rule selection process.
Each error type is assigned a weight value that reflects its importance. The weighted support and weighted
confidence values are then used to filter the rules mined from the weighted data. The system improves the failure rate
detection process with high accuracy levels.
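A minimal sketch of the weighted measure is shown below; it assumes one common WARM formulation in which each item carries a weight and the support of an itemset is scaled by the mean weight of its items, and the weights and transactions are hypothetical:

    import java.util.*;

    public class WeightedSupportDemo {
        static List<Set<String>> transactions = Arrays.asList(
                new HashSet<>(Arrays.asList("SERVICE_ERROR", "CRASH")),
                new HashSet<>(Arrays.asList("SERVICE_ERROR", "INTERACTION_ERROR", "CRASH")),
                new HashSet<>(Arrays.asList("SERVICE_ERROR")),
                new HashSet<>(Arrays.asList("INTERACTION_ERROR")));

        // Importance weights per error type (made-up values; crashes weighted highest).
        static Map<String, Double> weight = Map.of(
                "SERVICE_ERROR", 0.4, "INTERACTION_ERROR", 0.6, "CRASH", 1.0);

        // Plain support: fraction of transactions containing the itemset.
        static double support(Set<String> items) {
            long hits = transactions.stream().filter(t -> t.containsAll(items)).count();
            return (double) hits / transactions.size();
        }

        // Weighted support: plain support scaled by the mean weight of the items in the itemset.
        static double weightedSupport(Set<String> items) {
            double meanWeight = items.stream().mapToDouble(weight::get).average().orElse(0.0);
            return meanWeight * support(items);
        }

        public static void main(String[] args) {
            Set<String> rule = new HashSet<>(Arrays.asList("SERVICE_ERROR", "CRASH"));
            System.out.printf("weighted support = %.2f (plain support = %.2f)%n",
                    weightedSupport(rule), support(rule));
        }
    }

Under this weighting, rules involving severe error types such as crashes are retained even when their raw frequency in the log is modest.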
VI. SOFTWARE FAILURE ANALYSIS USING ARM
The event-log-based software failure analysis system is designed to fetch failure details from the logs. The association rule
mining and weighted association rule mining techniques are used in the system. The system is divided into four
major modules. They are product, class, logs and failures modules. The product module is used to collect product
information from the user. The class details are extracted from the product folder details. The logs module is used
to fetch error details from the logs. The failure module is used to apply the ARM and WARM techniques for failure
detection process.
6.1. Product Module
The Product module is designed to fetch the product name and path. The classes and their size details are
displayed in the product details window. The system also displays the product description in a separate window.
The user can select and close the projects dynamically.
6.2. Class Module
This module is designed to collect the system information. Each system is composed of a set of classes.
A class is formed from a set of attributes and methods. The class module has three sub modules. They are the
overall class information, attribute information and the method information. The application maintains a separate
file for the system information.
6.3. Logs Module
The logs module is designed to maintain the software execution logs. The execution events are updated in
the log files. Different types of errors are maintained under the logs. The service error, service complaints and
interaction errors are highlighted in the logs. The log analysis process produces the summary of each error under
each class in the product.
6.4. Failure Module
The software failure module is designed to identify the failure rate in the software logs. The log event
details are analyzed with association rule mining and weighted association rule mining methods. The WARM
produces accurate failure levels in each class execution process.
VII. PERFORMANCE ANALYSIS
The software failure analysis system is tested with different software products and log files. The Association
Rule Mining (ARM) and Weighted Association Rule Mining (WARM) techniques are used in the analysis
process. The false positive rate and false negative rate measures are used to evaluate the system accuracy levels.
False positive rate analysis details are compared in figure 7.1. The analysis result shows that the WARM scheme reduces
the false positive error rate by about 30% compared with the ARM technique. The false negative error rate analysis is compared in
figure 7.2. The analysis result shows that the WARM model reduces the false negative error rate by about 25% compared with the ARM technique.
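For reference, both measures can be computed from the confusion counts of a detector as in the sketch below; the counts shown are placeholders rather than the experimental data behind the figures:

    public class RateMetrics {
        public static void main(String[] args) {
            // Hypothetical confusion counts for one run of the failure detector.
            int tp = 92, fn = 8;   // failures detected vs. failures missed
            int fp = 3, tn = 397;  // spurious failure reports vs. correctly silent cases
            double falsePositiveRate = 100.0 * fp / (fp + tn);
            double falseNegativeRate = 100.0 * fn / (fn + tp);
            System.out.printf("FPR = %.2f%%, FNR = %.2f%%%n", falsePositiveRate, falseNegativeRate);
        }
    }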
Figure No: 7.1. False Positive Rate Analysis between RM Scheme and WRM Scheme (false positive rate (%) plotted against the number of log entries, from 100 to 500)
Figure No: 7.2. False Negative Rate Analysis between RM Scheme and WRM Scheme
VIII. CONCLUSION AND FUTURE WORK
The Association Rule Mining (ARM) technique is used to evaluate the logs to detect software failures.
The Apriori algorithm is used to identify the failure cases. The Weighted Association Rule Mining (WARM)
model is used to improve the accuracy of the failure rate estimation process. The software error log analysis
system is implemented using the Association Rule Mining (ARM) and Weighted Association Rule Mining
(WARM) techniques. The WARM model produces a better detection accuracy level than the ARM model. The
system can be enhanced with the following features.
• The system can be enhanced to handle software failures automatically by controlling the software termination
operations.
• The offline error log analysis mechanism can be adapted to an online error analysis model to manage web site errors.
• The system can be enhanced with clustering techniques to fetch the error patterns.
• The system can be improved to identify the failure risk level based on the log and failure relationships.
REFERENCES
[1] H. Barringer, A. Groce, K. Havelund and M. Smith, “Formal Analysis of Log Files,” J. Aerospace Computing,
Information and Comm., pp. 365-390, 2010.
[2] M. Cinque, R. Natella, and A. Pecchia, “Assessing and Improving the Effectiveness of Logs for the Analysis of Software
Faults,” Proc. Int’l Conf. Dependable Systems and Networks, pp. 457-466, 2010.
[3] Venera Arnaoudova, Laleh M. Eshkevari, Rocco Oliveto and Giuliano Antoniol, “REPENT: Analyzing the Nature of
Identifier Renamings”, IEEE Transactions On Software Engineering, Vol. 40, No. 5, May 2014
[4] Cemal Yilmaz, Myra B. Cohen and Adam Porter, “Reducing Masking Effects in Combinatorial Interaction Testing: A
Feedback Driven Adaptive Approach”, IEEE Transactions On Software Engineering, Vol. 40, No. 1, January 2014.
[5] Marcello Cinque, Domenico Cotroneo and Antonio Pecchia, “Event Logs for the Analysis of Software Failures: A Rule-
Based Approach”, IEEE Transactions On Software Engineering, June 2013.
[6] IBM, “Common Event Infrastructure,” http://www-01.ibm.com/software/tivoli/features/cei, 2012.
[7] D. Yuan, J. Zheng, S. Park, Y. Zhou and S. Savage, “Improving Software Diagnosability via Log Enhancement,” Proc.
Int’l Conf. Architectural Support for Programming Languages and Operating Systems, pp. 3-14, 2011.
[8] A. Rabkin, W. Xu and R. Katz, “A Graphical Representation for Identifier Structure in Logs,” Proc. Workshop Managing
Systems via Log Analysis and Machine Learning Techniques, 2010.
[9] Apache log4j, http://logging.apache.org/log4j/, 2012.
[10] A. Pecchia, Z. Kalbarczyk and R.K. Iyer, “Improving Log-Based Field Failure Data Analysis of Multi-Node Computing
Systems,” Proc. Int’l Conf. Dependable Systems and Networks, pp. 97-108, 2011.