The adoption of cloud environments for a wide range of applications has raised security and privacy concerns over user data. Protecting user data and privacy on such platforms remains an area of active concern.
Many cryptographic strategies have been presented to provide secure sharing of resources on cloud platforms. These methods try to achieve a secure authentication strategy realizing features such as self-blindable access tickets, group signatures, anonymous access tickets, minimal disclosure of tickets, and revocation, but each varies in how it realizes these features. Each feature requires a different cryptographic mechanism, which induces computational complexity and hinders the deployment of these models in practical applications. Moreover, most of these techniques are designed for a particular application environment and adopt public-key cryptography, which incurs high cost due to its computational complexity.
To address these issues, this work presents secure and efficient privacy preserving of mining data on a public cloud platform by adopting a party- and key-based authentication strategy. The proposed SCPPDM (Secure Cloud Privacy Preserving Data Mining) model is deployed on the Microsoft Azure cloud platform. Experiments are conducted to evaluate computational complexity, and the outcomes show that the proposed model achieves significant performance gains in terms of computational overhead and cost.
An analysis of software aging in cloud environment IJECEIAES
Cloud computing is an environment in which several virtual machines (VMs) run concurrently on physical machines. The cloud infrastructure hosts multiple cloud services that communicate with each other through interfaces. During operation, software systems accumulate errors or garbage that can lead to system failure and other hazardous consequences; this condition is called software aging. Software aging arises from memory fragmentation, large-scale resource consumption, and the accumulation of numerical errors. It degrades performance and may ultimately cause system failure through premature resource exhaustion. The errors that cause software aging are of a particular kind, affecting response time and the system's environment. The issue can only be resolved at run time because of the dynamic nature of the problem. To alleviate the impact of software aging, software rejuvenation is used: the rejuvenation process reboots the system or restarts the affected software, removing accumulated error conditions, freeing up deadlocks, and defragmenting operating system resources such as memory. Software aging and rejuvenation have generated considerable research interest recently. This work reviews research on the detection of software aging and identifies research gaps.
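As a minimal sketch of the rejuvenation idea, the loop below monitors one common aging symptom, memory utilization, and triggers a restart action when it crosses a threshold. The threshold, check interval, and restart action are illustrative assumptions, not taken from any of the reviewed works.

```python
import time
import psutil  # third-party: pip install psutil

MEMORY_THRESHOLD = 90.0  # percent; assumed trigger level
CHECK_INTERVAL = 60      # seconds between health checks

def needs_rejuvenation() -> bool:
    """High memory use is a common proxy for the resource
    exhaustion caused by leaks and fragmentation."""
    return psutil.virtual_memory().percent >= MEMORY_THRESHOLD

def rejuvenate() -> None:
    """Placeholder for the actual rejuvenation action,
    e.g. a graceful service restart or a VM reboot."""
    print("rejuvenation triggered: restarting service...")

while True:
    if needs_rejuvenation():
        rejuvenate()
    time.sleep(CHECK_INTERVAL)
```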
An Investigation of Fault Tolerance Techniques in Cloud Computing ijtsrd
Cloud computing, which is built on the Internet, offers a powerful computation architecture that provides users with information technology capabilities as a service and allows them to access these services without specialized knowledge of, or control over, the underlying infrastructure. Fault tolerance encompasses all the techniques necessary to keep cloud computing available and reliable; its main advantages include failure recovery, lower costs, and improved performance. In this paper, we investigate the different techniques that are used for fault tolerance in cloud computing. Ya Min, Khin Myat Nwe Win, Aye Mya Sandar, "An Investigation of Fault Tolerance Techniques in Cloud Computing", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN 2456-6470, Volume 3, Issue 5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26611.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/26611/an-investigation-of-fault-tolerance-techniques-in-cloud-computing/ya-min
Program aging is a degradation of performance or functionality caused by resource depletion. Aging affects cloud services that provide access to large data banks and computing resources, and it incurs large budgets and delays for defect removal, requiring complementary solutions such as renewal in the form of controlled restarts. The collection of various runtime metrics is a significant source for further study in the detection and analysis of aging issues. This study highlights a method for detecting aging immediately after its introduction through runtime comparisons of different development scenarios, focusing on program aging and the service crashes that result from it.
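A minimal sketch of such a runtime comparison, under the assumption that aging shows up as an upward trend in a resource metric: fit a line to the metric sampled from each development scenario and flag the change when the candidate build's slope is markedly steeper than the baseline's. The data and tolerance below are illustrative.

```python
import numpy as np

def aging_slope(metric_series):
    """Fit a line to a runtime metric (e.g. resident memory in MB
    sampled over time); a positive slope suggests resource depletion."""
    t = np.arange(len(metric_series))
    slope, _ = np.polyfit(t, metric_series, 1)
    return slope

# Two development scenarios: baseline build vs. build with new changes.
baseline  = [100, 101, 100, 102, 101, 103]   # illustrative samples
candidate = [100, 104, 109, 113, 118, 122]

if aging_slope(candidate) > aging_slope(baseline) + 1.0:  # tolerance is an assumption
    print("possible aging introduced by the new changes")
```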
Abstract
Researchers in the fields of software engineering, business process improvement, and information engineering all want to drastically modernize software life-cycle processes and technologies to correct existing problems and improve software quality. Research goals have included ancillary issues such as improving user services through conversion to new platforms and facilitating software processes by adopting automated tools. Automated tools for software development, understanding, maintenance, and documentation add to process maturity, leading to better quality and reliability of computer services and greater customer satisfaction. This paper focuses on critical issues in legacy program improvement, which requires estimating the program from various perspectives. The paper highlights various elements of legacy program complexity that can be taken into account in further program development.
Keywords: Legacy, Program, Software complexity, Code, Integration
This project concerns an "E-Filing System". Its purpose is the faster movement of files and documents through the different layers of non-government and government offices. The system is used to store and track electronic documents and files. It is very effective because it offers a centralized source of information, improved security, cost-effectiveness, improved workflow, maximized customer satisfaction, easy retrieval, and flexible search. The aim of the electronic-file system is to implement paperless communication throughout the organization, which helps the system run smoothly and ensures quick disposal of proposals. The goal is not to be a slave to paper, searching for it, filing it, approving it, storing it, and losing it at inconvenient times, but rather to handle paper electronically and lower its intrinsic administrative cost.
On-line Power System Static Security Assessment in a Distributed Computing Fr... idescitation
Computation overhead is a major concern when seeking increased accuracy in online power system security assessment (OPSSA). This paper proposes a scalable solution technique based on a distributed computing architecture to mitigate the problem. A variant of the master/slave pattern is used for deploying the cluster of workstations (COW) that acts as the computational engine for the OPSSA. Owing to the inherent parallel structure of the security analysis algorithm, domain decomposition is adopted instead of functional decomposition to exploit the potential of distributed computing. The security assessment is performed using the developed composite security index, which can accurately differentiate secure and non-secure cases and is defined as a function of bus voltage and line flow limit violations. The validity of the proposed architecture is demonstrated by results obtained from intensive experimentation on the benchmark IEEE 57-bus test system. The proposed framework, which is scalable, can be further extended to intelligent monitoring and control of power systems.
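Since the abstract defines the index only as a function of bus voltage and line flow limit violations, the sketch below shows one plausible penalty-style formulation over per-unit quantities; the limits and normalization are assumptions, not the paper's exact definition.

```python
def composite_security_index(bus_voltages, line_flows,
                             v_limits=(0.95, 1.05), flow_limit=1.0):
    """Illustrative penalty-style index: sums normalized violations of
    voltage bounds and line-flow limits (all values in per-unit)."""
    v_lo, v_hi = v_limits
    index = 0.0
    for v in bus_voltages:
        if v < v_lo:
            index += (v_lo - v) / v_lo
        elif v > v_hi:
            index += (v - v_hi) / v_hi
    for f in line_flows:
        if abs(f) > flow_limit:
            index += (abs(f) - flow_limit) / flow_limit
    return index  # 0.0 => secure; larger => more severe insecurity

print(composite_security_index([1.02, 0.92, 1.06], [0.8, 1.3]))
```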
A model for run time software architecture adaptation ijseajournal
As the global demand for software systems grows and their environments change constantly, the adaptability of software systems is of significant importance. Because the architecture of a software system is a high-level view of the system that makes modification possible at an overall level, changing the architecture configuration is an effective approach to adapting software systems. In this study, the architecture configuration is modified through xADL, a highly flexible software architecture description language. Software architecture reconfiguration is driven by the rules of a rule-based system, written with respect to three strategies: load balancing, fixed bandwidth, and fixed latency. The proposed model is simulated on samples of a client-server system, a video conferencing system, and a students' grading system, and can be used with all types of architecture, including client-server and service-oriented architectures.
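As an illustration of the rule-based selection, the sketch below maps observed runtime metrics to the three strategies named in the abstract. The metric names and thresholds are assumptions, not taken from the paper's xADL rule set.

```python
# Illustrative rule base: first matching condition wins.
RULES = [
    (lambda m: m["server_load"] > 0.8,     "load_balancing"),
    (lambda m: m["bandwidth_mbps"] < 10.0, "fixed_bandwidth"),
    (lambda m: m["latency_ms"] > 200.0,    "fixed_latency"),
]

def select_strategy(metrics):
    """Return the reconfiguration strategy for the current metrics,
    or None when no rule fires and the configuration is kept."""
    for condition, strategy in RULES:
        if condition(metrics):
            return strategy
    return None

print(select_strategy({"server_load": 0.9,
                       "bandwidth_mbps": 50.0,
                       "latency_ms": 80.0}))  # -> load_balancing
```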
Cluster Computing Environment for On-line Static Security Assessment of lar... IDES Editor
The increasing size of modern power systems demands faster and more accurate means of security assessment, so that decisions for reliable and secure operation planning can be drawn in a systematic manner. Large computational overhead is the major impediment preventing power system security assessment (PSSA) from being used on-line. To mitigate this problem, this paper proposes a cluster computing based architecture for power system static security assessment, built with tools from the open source domain. A variant of the master/slave pattern is used for deploying the cluster of workstations (COW) that acts as the computational engine for the on-line PSSA. The security assessment is performed using the developed composite security index, which can accurately differentiate secure and non-secure cases and is defined as a function of bus voltage and line flow limit violations. Owing to the inherent parallel structure of the security assessment algorithm, and to exploit the potential of distributed computing, domain decomposition is employed to parallelize the sequential algorithm. Extensive experiments were carried out on the IEEE 57-bus and IEEE 145-bus 50-machine standard test systems to demonstrate the validity of the proposed architecture.
Proactive cloud service assurance framework for fault remediation in cloud en... IJECEIAES
Cloud resiliency is an important issue in the successful implementation of cloud computing systems. Handling cloud faults proactively, with a suitable remediation technique of minimum cost, is an important requirement for a fault management system. Selecting the best applicable remediation technique is a decision-making problem that considers parameters such as i) the impact of the remediation technique, ii) the overhead of the remediation technique, iii) the severity of the fault, and iv) the priority of the application. This manuscript proposes an analytical model to measure the effectiveness of a remediation technique for various categories of faults, and demonstrates the implementation of an efficient fault remediation system using a rule-based expert system. The expert system computes a utility value for each remediation technique in a novel way and selects the best remediation technique from its knowledge base. A prototype was developed for experimentation, and the results show improved availability with less overhead compared to a reactive fault management system.
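The abstract does not spell out how the utility value is computed, so the sketch below assumes one plausible weighting over the four listed parameters (values in [0, 1]) and picks the technique with the highest utility from a hypothetical knowledge base.

```python
def utility(impact, overhead, severity, priority):
    """Assumed utility: reward impact, penalize overhead, and scale by
    fault severity and application priority. The paper computes its
    utility 'in a novel way'; this weighting is only illustrative."""
    return (impact - overhead) * severity * priority

TECHNIQUES = {  # hypothetical knowledge-base entries
    "vm_restart":   {"impact": 0.9, "overhead": 0.6},
    "live_migrate": {"impact": 0.8, "overhead": 0.3},
    "retry":        {"impact": 0.4, "overhead": 0.1},
}

def best_remediation(severity, priority):
    return max(TECHNIQUES,
               key=lambda t: utility(TECHNIQUES[t]["impact"],
                                     TECHNIQUES[t]["overhead"],
                                     severity, priority))

print(best_remediation(severity=0.8, priority=1.0))  # -> live_migrate
```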
A LOG-BASED TRACE AND REPLAY TOOL INTEGRATING SOFTWARE AND INFRASTRUCTURE ijseajournal
We propose a log-based analysis tool for evaluating web application computer systems. A key feature of the tool is the integration of software logs with infrastructure logs. With the tool, software engineers alone can resolve system faults, even when the faults involve both software problems and infrastructure problems. The tool consists of five steps: preparing the software, preparing the infrastructure, collecting logs, replaying the log data, and tracing the log data. The tool was applied to a simple web application system in a small-scale local area network, and we confirmed its usefulness when a software engineer diagnoses system failures such as "404" and "no response" errors. In addition, the tool was partially applied to a real large-scale computer system with many web applications and a large network environment. Using the replay and tracing features, we found the causes of a real authentication error, which combined an infrastructure problem with a software problem. We confirmed that, even when a failure is caused by both a software problem and an infrastructure problem, software engineers can distinguish between the two using the tool.
AN INVESTIGATION OF THE MONITORING ACTIVITY IN SELF ADAPTIVE SYSTEMS ijseajournal
Runtime monitoring is essential for detecting violations during the execution of the underlying software system. This paper investigates the monitoring activity of the MAPE-K control loop, exploring: (1) the architecture of the monitoring activity in terms of the components involved and the control and data flow between them; (2) the standard interface of the monitoring component with the other MAPE-K components; (3) adaptive monitoring and its importance to the monitoring-overhead issue; and (4) the monitoring mode and its relevance to specific situations and systems. The paper also presents a Java framework for the monitoring process in self-adaptive systems.
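The abstract's framework is in Java; as a language-neutral outline, the Python sketch below mirrors the MAPE-K control and data flow under investigation. The probe, the SLA threshold, and the adaptation action are all illustrative assumptions.

```python
# Minimal MAPE-K-style iteration: Monitor senses, Analyzer checks for
# violations, Planner picks an action, Executor applies it, and the
# shared knowledge base records what was observed.
class ManagedSystem:            # stand-in for the real system under adaptation
    def response_time(self) -> float:
        return 620.0            # ms; dummy probe value
    def apply(self, action: str) -> None:
        print("applying:", action)

def mape_k_iteration(system: ManagedSystem, knowledge: list) -> None:
    data = {"response_ms": system.response_time()}   # Monitor
    knowledge.append(data)                           # Knowledge
    if data["response_ms"] > 500.0:                  # Analyzer: assumed SLA
        action = "scale_out"                         # Planner
        system.apply(action)                         # Executor

knowledge: list = []
mape_k_iteration(ManagedSystem(), knowledge)
```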
Privacy Preserving Public Auditing and Data Integrity for Secure Cloud Storag... INFOGAIN PUBLICATION
Using cloud services, anyone can remotely store their data and obtain on-demand, high-quality applications and services from a shared pool of computing resources, without the burden of local data storage and maintenance. The cloud is a common place both for storing data and for sharing it. However, preserving the privacy and maintaining the integrity of data during public auditing remains an open challenge. In this paper, we introduce a third party auditor (TPA) that keeps track of all files along with their integrity. The task of the TPA is to verify the data, so that the user can be worry-free. Verification is performed on the aggregate authenticators sent by the user and the Cloud Service Provider (CSP). To this end, we propose a secure cloud storage system that supports privacy-preserving public auditing and blockless data verification over the cloud.
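As a rough illustration of the auditing workflow, the sketch below shows a heavily simplified spot-check: the data owner precomputes responses to random challenges before outsourcing a block, hands them to the TPA, and the TPA later challenges the server and compares. Real schemes, including the one proposed here, use aggregate authenticators so that verification is blockless and challenges are unbounded; everything in this sketch is an illustrative assumption.

```python
import hashlib
import os

def precompute_challenges(block: bytes, n: int = 3):
    """Owner side: build (nonce, expected digest) pairs for later
    audits; the pairs go to the TPA, the block is not retained."""
    return [(nonce, hashlib.sha256(nonce + block).digest())
            for nonce in (os.urandom(16) for _ in range(n))]

def server_response(stored_block: bytes, nonce: bytes) -> bytes:
    """Server side: prove possession of the block for a challenge."""
    return hashlib.sha256(nonce + stored_block).digest()

def tpa_verify(expected: bytes, response: bytes) -> bool:
    """TPA side: compare the server's proof with the stored value."""
    return expected == response

# One audit round over a toy block:
block = b"outsourced data block"
nonce, expected = precompute_challenges(block)[0]
print(tpa_verify(expected, server_response(block, nonce)))  # True
```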
Data migration system in heterogeneous database eSAT Journals
Abstract: With information becoming an increasingly valuable corporate asset, today's IT organizations need the right tools to store, manage, and move that information in the most reliable and cost-efficient manner. As part of an Information Lifecycle Management (ILM) best-practices strategy, organizations require innovative solutions for migrating data between storage systems, especially in heterogeneous environments. To support this need, we have planned to design a powerful tool that enables affordable, high-performance data migration in a wide range of storage environments. This project addresses the unique challenges of data migration in dynamic IT environments and the key business advantages our design provides over traditional migration tools. Keywords: Data Migration, Database, Design.
Software aging prediction – a new approach IJECEIAES
To meet users' requirements, which have become very diverse in recent years, computing infrastructure has grown complex; a cloud-based system is one example. Such systems suffer from resource exhaustion in the long run, which leads to performance degradation. This phenomenon is called software aging. Software aging needs to be predicted so that pre-emptive rejuvenation can be carried out to enhance service availability; software rejuvenation is the technique that refreshes the system and brings it back to a healthy state. In this work, a new k-nearest neighbor (k-NN) based approach is used to identify a virtual machine's status, and the resource exhaustion time is predicted. The proposed prediction model uses both static thresholding and adaptive thresholding. Comparing the algorithms' classification performance, k-NN performs best with an accuracy of 97.6%, while its counterparts achieve 96.0% (naïve Bayes) and 92.8% (decision tree). A comparison of the proposed work with previous similar works is also discussed.
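A minimal reconstruction of the classification-plus-thresholding pipeline, assuming memory and swap utilization as features; the tiny training set, the choice of k, and the linear exhaustion estimate are illustrative stand-ins for the paper's data and settings.

```python
from sklearn.neighbors import KNeighborsClassifier

# Classify VM health from runtime metrics: [memory %, swap %].
X_train = [[62, 48], [85, 75], [40, 30], [91, 88]]  # assumed samples
y_train = ["healthy", "aging", "healthy", "aging"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[88, 80]]))  # -> ['aging']

# Static-threshold exhaustion estimate from a linear usage trend:
rate_per_hour, current, static_threshold = 1.5, 88.0, 95.0
print((static_threshold - current) / rate_per_hour, "hours to exhaustion")
```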
A Review on Software Fault Detection and Prevention Mechanism in Software Dev...iosrjce
REGULARIZED FUZZY NEURAL NETWORKS TO AID EFFORT FORECASTING IN THE CONSTRUCTI... ijaia
Predicting the time needed to build software is a very complex task for software engineering managers. Complex factors can directly interfere with the development team's productivity, and factors related to the complexity of the system to be developed drastically change the time software factories need to complete the work. This work proposes a hybrid system based on artificial neural networks and fuzzy systems to help construct a rule-based expert system that predicts the hours required for software development according to the complexity of the elements it contains. The set of fuzzy rules obtained by the system supports the management and control of software development by providing a base of interpretable, fuzzy-rule-based estimates. The model was tested on a real database, and its results were promising for building a mechanism to aid the predictability of software construction.
Implementation of reducing features to improve code change based bug predicti... eSAT Journals
Abstract: Today we see plenty of bugs in software because of variations in software and hardware technologies. Bugs are software faults that pose a severe challenge to system reliability and dependability, and bug prediction is a convenient approach to identifying them. Recently, machine learning classifier approaches have been developed to flag the presence of a bug in a source code file. Because of the huge number of machine-learned features, current classifier-based bug prediction has two major problems: i) inadequate precision for practical usage and ii) slow prediction time. In this paper we use two techniques: first, the cos-triage algorithm, which attempts to enhance the accuracy and lower the cost of bug prediction, and second, feature selection methods that eliminate less significant features. Reducing features improves the quality of the knowledge extracted and also boosts the speed of computation. Keywords: Efficiency, Bug Prediction, Classification, Feature Selection, Accuracy
EMPIRICAL APPLICATION OF SIMULATED ANNEALING USING OBJECT-ORIENTED METRICS TO... ijcsa
This work uses the simulated annealing algorithm to optimize the parameters of an effort estimation model, which can reduce the difference between actual and estimated effort in model development. The model has been tested using an object-oriented dataset obtained from NASA for research purposes. The dataset-based model equation parameters have been found, with Lines of Code (LOC) among the independent variables and one more attribute as the dependent variable related to software development effort (DE). The results have been compared with the author's earlier work on Artificial Neural Networks (ANN) and the Adaptive Neuro Fuzzy Inference System (ANFIS), and it has been observed that the developed SA-based model provides better estimates of software development effort than ANN and ANFIS.
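A sketch of the SA parameter search under the common assumption of a power-law effort model, effort = a * LOC**b, minimizing squared error; the toy dataset and cooling schedule are stand-ins for the NASA data and the paper's settings.

```python
import math
import random

data = [(10.0, 24.0), (23.0, 45.0), (50.0, 120.0)]  # (KLOC, effort), illustrative

def cost(a, b):
    """Sum of squared errors between actual and estimated effort."""
    return sum((e - a * loc ** b) ** 2 for loc, e in data)

a, b, T = 2.0, 1.0, 1.0
best = (a, b, cost(a, b))
while T > 1e-3:
    na = a + random.uniform(-0.1, 0.1)    # perturb current parameters
    nb = b + random.uniform(-0.05, 0.05)
    delta = cost(na, nb) - cost(a, b)
    if delta < 0 or random.random() < math.exp(-delta / T):  # Metropolis rule
        a, b = na, nb
        if cost(a, b) < best[2]:
            best = (a, b, cost(a, b))
    T *= 0.995                             # geometric cooling
print(best)
```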
QUALITY-AWARE APPROACH FOR ENGINEERING SELF-ADAPTIVE SOFTWARE SYSTEMS cscpconf
Self-adaptivity allows software systems to autonomously adjust their behavior during run-time to reduce the cost complexities caused by manual maintenance. In this paper, an approach for building an external adaptation engine for self-adaptive software systems is proposed. In order to improve the quality of self-adaptive software systems, this research addresses two challenges in self-adaptive software systems: the first is managing the complexity of the adaptation space efficiently, and the second is handling the run-time uncertainty that hinders the adaptation process. This research utilizes Case-based Reasoning as an adaptation engine, along with utility functions for realizing the managed system's requirements and handling uncertainty.
Association Rule Mining Scheme for Software Failure Analysis Editor IJMTER
The software execution process is tracked with event logs, which maintain the execution process flow in a textual log file. The log file also records error values and their source classes; these error values are used to analyze software failures. Data mining methods are used to evaluate quality and to analyze the software failure rate. The text logs are parsed, data values are extracted from the log entries, and the extracted values are mined using machine learning methods for failure analysis. Service errors, service complaints, interaction errors, and crash errors are maintained in the log files, along with the events and their reactions. Software terminations and execution failures are identified using the log details. A log file parsing process extracts data from the logs, and association rule mining methods are applied to the log files for failure detection. The system uses the Weighted Association Rule Mining (WARM) scheme to estimate the failure rate in the software execution flow, improving failure rate detection accuracy under the WARM model.
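To make the weighted-support idea concrete, the sketch below scales an itemset's frequency by the mean weight of its items, with heavier weights for more severe event types. The weights, transactions, and threshold are assumptions; WARM's exact formulation may differ.

```python
from itertools import combinations

# Assumed severity weights per log event type.
weights = {"crash": 1.0, "service_error": 0.8,
           "interaction_error": 0.5, "complaint": 0.3}
transactions = [  # event sets parsed from log-file sessions (illustrative)
    {"service_error", "crash"}, {"complaint", "interaction_error"},
    {"service_error", "crash"}, {"service_error", "complaint"},
]

def weighted_support(itemset):
    """Plain support scaled by the itemset's mean item weight."""
    freq = sum(1 for t in transactions if itemset <= t) / len(transactions)
    return freq * sum(weights[i] for i in itemset) / len(itemset)

for pair in combinations(weights, 2):
    ws = weighted_support(set(pair))
    if ws >= 0.2:  # illustrative minimum weighted support
        print(pair, round(ws, 3))
```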
Contributors to Reduce Maintainability Cost at the Software Implementation Phase Waqas Tariq
Software maintenance is important and difficult to measure; its cost is the highest of all the phases of software development. One of the most critical processes in software development is reducing software maintainability cost through source code quality at the design step, yet there is a lack of quality models and measures that can help assess the quality attributes of the software maintainability process. Software maintainability suffers from a number of challenges, such as a lack of source code understanding, poor software code quality, and weak adherence to programming standards during maintenance. This work describes model-based factors for assessing software maintenance and explains the steps followed to obtain and validate them. Such a method can be used to reduce software maintenance cost. The research results will enhance the quality of source code, increase software understandability, reduce maintenance time and cost, and give confidence in software reusability.
A methodology to evaluate object oriented software systems using change requi... ijseajournal
It is a well known fact that software maintenance plays a major role and finds importance in the software development life cycle. As object-oriented programming has become the standard, it is very important to understand the problems of maintaining object-oriented software systems. This paper aims at evaluating object-oriented software systems through a change requirement traceability-based impact analysis methodology for non-functional requirements using functional requirements. The major issues relate to change impact algorithms and the inheritance of functionality.
DESIGN PATTERNS IN THE WORKFLOW IMPLEMENTATION OF MARINE RESEARCH GENERAL INF... AM Publications
This paper proposes the use of design patterns in a marine research general information platform, whose development involves the design of a complicated system architecture. The creation and execution of research workflow nodes, and the design of a visualization library suited to marine users, play an important role in the overall software architecture. This paper studies the requirement characteristics of marine research fields and implements a series of frameworks to solve these problems based on object-oriented and design pattern techniques. These frameworks clarify the relationships between modules and layers of the software, which communicate through unified abstract interfaces that reduce the coupling between modules and layers. Building these frameworks is significant for advancing the reusability of the software and strengthening the extensibility and maintainability of the system.
In our homes and offices, security has always been a vital issue. Remote control of a home security system offers huge advantages, such as arming and disarming alarms, video monitoring, and energy management control, in addition to safeguarding the home from intruders. Security has evolved from the oldest simple method, the mechanical lock with a key as the authentication element, through universal locks, to unique codes for each lock. Recent advances in communication systems have brought communication gadgets into many areas of our lives. This work is a real-time smart doorbell notification system for home security, as opposed to traditional security methods. It consists of a doorbell interfaced with a GSM module: pressing the doorbell triggers the GSM module to send an SMS to the house owner, who can respond by pressing a button to open the door; otherwise a message is displayed to the guest for appropriate action. A keypad allows an authorized person to enter a password for unlocking the door; if multiple wrong password attempts are made, a burglary-attempt message is sent to the house owner for prompt action. The main benefit of this system is the unique combination of password and messaging systems, which denies access to unauthorized persons while keeping the owner aware.
Augmented reality, the new-age technology, has widespread applications in every field imaginable and has proven to be an inflection point in numerous verticals, improving lives and performance. In this paper, we explore the possible applications of Augmented Reality (AR) in the field of medicine. The objective of using AR in medicine, or in any field, is that AR helps motivate the user, makes sessions interactive, and assists in faster learning. We discuss the applicability of AR to medical diagnosis: augmented reality reinforces remote collaboration, allowing doctors to diagnose patients from a different locality. We believe a much more pronounced effect can be achieved by bringing together the cutting-edge technology of AR and the lifesaving field of medical science. AR is a mechanism that could be applied in the learning process too; similarly, virtual reality could be used in fields where more practical experience is needed, such as driving, sports, and neonatal care training.
Image fusion is a subfield of image processing in which two or more images are fused to create an image where all the objects are in focus. Image fusion is performed for multi-sensor and multi-focus images of the same scene: multi-sensor images are captured by different sensors, whereas multi-focus images are captured by the same sensor. In multi-focus images, objects closer to the camera are in focus while farther objects are blurred, and vice versa when the farther objects are focused. To obtain an image in which all objects are in focus, image fusion is performed either in the spatial domain or in a transformed domain. The applications of image processing have grown immensely in recent times, and owing to the limited depth of field of optical lenses, especially at greater focal lengths, it is usually impossible to capture a single image in which all objects are in focus; fusion therefore supports subsequent tasks such as image segmentation, edge detection, stereo matching, and image enhancement. Hence, a novel feature-level multi-focus image fusion technique is proposed. Results of extensive experimentation are presented to highlight the efficiency and utility of the proposed technique, and the work further compares fuzzy-based image fusion with a neuro-fuzzy fusion technique along with quality evaluation indices.
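As a baseline illustration of feature-level multi-focus fusion (not necessarily the paper's fuzzy or neuro-fuzzy rule), the sketch below measures per-pixel focus with local Laplacian energy and keeps the sharper source pixel.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse(img_a: np.ndarray, img_b: np.ndarray, window: int = 9) -> np.ndarray:
    """Fuse two registered grayscale multi-focus images: at each pixel,
    take the source whose local Laplacian energy (a focus measure) is
    higher within the given window."""
    focus_a = uniform_filter(laplace(img_a.astype(float)) ** 2, window)
    focus_b = uniform_filter(laplace(img_b.astype(float)) ** 2, window)
    return np.where(focus_a >= focus_b, img_a, img_b)

# Usage: fused = fuse(img_near_focus, img_far_focus)
```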
Graphs have become the dominant representation for many tasks, as they provide a structure for expressing entities and their relations. A powerful role of networks/graphs is to bridge local features of vertices as they develop into patterns that help explain how nodes, their relations, and their edges produce complex effects that ripple through a graph. User clusters form as a result of interactions between entities, yet many users can hardly categorize their contacts into groups such as "family", "friends", or "colleagues". Analyzing a user's implicit social graph for implicit clusters therefore enables dynamic contact management. This study implements this dynamism via a comparative study of a deep neural network and a friend-suggest algorithm. We analyze a user's implicit social graph and seek to automatically create custom contact groups, using metrics that classify contacts based on the user's affinity to them. Experimental results demonstrate the importance of both the implicit group relationships and the interaction-based affinity in suggesting friends.
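A small sketch of interaction-based affinity over an implicit social graph: contacts who co-occur with the seed contact in past interactions score higher, with large groups discounted. The interaction data and scoring weights are illustrative assumptions, not the paper's algorithm.

```python
from collections import defaultdict

interactions = [  # each entry: the set of contacts on one interaction
    {"alice", "bob"}, {"alice", "bob", "carol"},
    {"alice", "dave"}, {"bob", "carol"},
]

def suggest(seed, k=2):
    """Rank contacts by affinity to the seed group: every interaction
    touching the seed votes for its other members, with larger groups
    contributing less per member."""
    score = defaultdict(float)
    for group in interactions:
        if seed & group:
            for contact in group - seed:
                score[contact] += 1.0 / len(group)
    return sorted(score, key=score.get, reverse=True)[:k]

print(suggest({"alice"}))  # -> ['bob', 'dave']
```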
This paper applies the Gryllidae Optimization Algorithm (GOA) to solve the optimal reactive power problem. The proposed GOA approach is based on the chirping characteristics of Gryllidae (crickets). Typically male Gryllidae chirp, though some females do as well; males attract females with the sound they produce, and they also warn other Gryllidae of danger with it. The hearing organs of Gryllidae are housed in an expansion of their forelegs, through which they perceive the produced sounds. The proposed GOA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the projected algorithm reduces real power loss considerably.
In the wake of the sudden replacement of wood and kerosene by gas cookers for several purposes in Nigeria, gas leakage has caused severe damage in our homes, laboratories, and elsewhere, and the installation of gas leakage detection devices has been adopted globally to eliminate accidents related to gas leakage. We present an alternative approach to developing a device that can automatically detect and control gas leakages and also monitor temperature. The system detects leakage of LPG (Liquefied Petroleum Gas) using a gas sensor, then triggers the control system's response, which employs a ventilator system, a mobile phone alert, and an alarm when the LPG concentration in the air exceeds a certain level. The performance of two gas sensors (MQ5 and MQ6) was tested for a guided decision. When the temperature of the environment poses a danger, an LED indicator, a buzzer, and a 16x2 LCD display indicate temperature and gas leakage status in degrees Celsius and PPM respectively. Attention was given to the response time of the control system, and it was ascertained that this system significantly increases the chances and efficiency of eliminating gas leakage related accidents.
Feature selection is one of the main problems in the text and data mining domain. This paper presents a comparative study of feature selection methods for Arabic text classification. Five feature selection methods were selected: improved CHI square (ICHI), CHI square, Information Gain, Mutual Information, and Wrapper. They were tested with five classification algorithms: Bayes Net, Naive Bayes, Random Forest, Decision Tree, and Artificial Neural Networks. An Arabic data collection consisting of 9055 documents was used, and methods were compared on four criteria: precision, recall, F-measure, and time to build the model. The results showed that the improved ICHI feature selection achieved almost all the best results in comparison with the other methods.
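A minimal sketch of the plain CHI-square step (the paper's ICHI improvement is not reproduced here), using scikit-learn on a toy corpus standing in for the 9055-document Arabic collection.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

docs = ["spam offer win", "meeting agenda notes",
        "win prize offer", "project meeting notes"]  # illustrative corpus
labels = [1, 0, 1, 0]

# Score each term against the class labels and keep the top k features.
X = CountVectorizer().fit_transform(docs)
selector = SelectKBest(chi2, k=3).fit(X, labels)
print(selector.get_support(indices=True))  # indices of the top-3 features
```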
In this paper the Gentoo Penguin Algorithm (GPA) is proposed to solve the optimal reactive power problem. In GPA, penguins in the initial population emit heat radiation and attract each other according to an absorption coefficient. Gentoo penguins move towards other penguins that have lower cost (higher heat concentration) of absorption, where cost is defined by heat concentration and distance. A penguin's attraction value is calculated from the amount of heat received between two penguins; heat radiation is modeled as linear, so less heat is received over longer distances and more heat over shorter distances. The Gentoo Penguin Algorithm has been tested on the standard IEEE 57-bus test system, and simulation results show the projected algorithm reduces real power loss considerably.
Academic insight on application IAESIJEECS
This research has thrown up many questions in need of further investigation. An expressive quantitative-qualitative study was conducted using a common investigation form, and a dialogue item was applied to discover whether participants felt the media-based approach supplements their learning in academic English writing classes. Data on academics' insights toward using Skype as a supporting tool for delivering lessons, examined across the chosen variables of occupation, year of education, and experience with Skype, revealed no statistically significant differences in the use of Skype units attributable to medical academics' major knowledge, but there were statistically significant differences attributable to the experience-with-Skype variable, in favor of academics with no prior Skype practice. Skype as an instructional medium is a positive way to deliver academic medical writing content and assist education. Academics who do not have enough time to participate in classes feel comfortable using the Skype-based approach in scientific writing, and those who took part in the course attributed their approval of this medium to learning innovative academic medical writing.
Cloud computing has a sweeping impact on human productivity. Today it is used for computing, storage, prediction, and intelligent decision making, among other things. Intelligent decision making using machine learning has pushed cloud services to become even faster, more robust, and more accurate. Security remains one of the major concerns affecting cloud computing growth; moreover, there exist various research challenges in cloud computing adoption, such as the lack of well-managed service level agreements (SLAs), frequent disconnections, resource scarcity, interoperability, privacy, and reliability. A tremendous amount of work still needs to be done to explore the security challenges arising from the widespread use of container-based cloud deployments. We also discuss the impact of cloud computing and cloud standards. Hence, in this research paper, a detailed survey of cloud computing concepts, architectural principles, key services, and implementation, design, and deployment challenges is presented, and important future research directions in the era of machine learning and data science are identified.
A notary is an official authorized to make authentic deeds regarding all deeds, agreements, and stipulations required by general law. Activities at a notary office, such as recording client data and file data, still use traditional systems that tend to be manual. The resulting problem is inefficiency in data processing and in providing information to clients: clients have difficulty getting information on the progress of documents being handled at the notary's office and must take the time to come to the office repeatedly to check on the progress of their files. The purpose of this study is to make it easier for clients to obtain information about work in progress and for employees to process incoming documents by implementing an administrative system. The system was developed with the waterfall development method and uses multi-channel access technology integrated into a website, simplifying the delivery and request of information to and from clients via Telegram and an SMS gateway. Clients come to the office only when the system notifies them via Telegram or SMS that they must appear in person, saving time and avoiding excessive transportation costs. Alpha testing showed that the system functions properly overall, and beta testing via a feasibility questionnaire given to end users found that 96% of users agree the system is feasible to implement.
In this work the Tundra wolf algorithm (TWA) is proposed to solve the optimal reactive power problem. In the proposed TWA, to prevent the search agents from being trapped in local optima, convergence toward the global optimum is divided into two different conditions. The omega tundra wolf is taken as a search agent, instead of being obliged to follow the first three best candidates. Increasing the number of search agents improves the exploration capability of the tundra wolves over a wide range. The proposed TWA has been tested on the standard IEEE 14- and 30-bus test systems, and simulation results show that the proposed algorithm reduces real power loss effectively.
In this work the Predestination of Particles Wavering Search (PPS) algorithm is applied to solve the optimal reactive power problem. The PPS algorithm is modeled on the motion of particles in the search space, which is normally a combination of gradient-based and swarming motion. Particles are permitted to progress at a steady velocity in gradient-based movement, but when the outcome is poor compared to the previous result, the particle's velocity is immediately reversed with half of the magnitude, which helps to reach a local optimal solution; this is termed the wavering movement. The proposed PPS algorithm is evaluated on the standard IEEE 14-, 30-, 57-, 118-, and 300-bus systems, and simulation results show that PPS reduces the power loss efficiently.
In this paper, the Mine Blast Algorithm (MBA) is hybridized with the Harmony Search (HS) algorithm to solve the optimal reactive power dispatch problem. MBA is based on the explosion of landmines, and HS on the creative process of musicians; both are combined to solve the problem. In MBA, the initial distance of the shrapnel pieces is reduced gradually, allowing the mine bombs to search the probable global minimum location and amplifying the global exploration capability. Harmony search imitates the music improvisation process, in which musicians adjust their instruments' pitch while searching for a best state of harmony. Hybridizing the Mine Blast Algorithm with the Harmony Search algorithm (MH) improves the search in the solution space: the mine blast algorithm improves exploration, and the harmony search algorithm augments exploitation. The proposed algorithm starts with exploration and gradually moves to the exploitation phase. The proposed MH algorithm has been tested on the standard IEEE 14- and 300-bus test systems, where it reduces real power loss considerably. MH was then tested on the IEEE 30-bus system (considering the voltage stability index), attaining real power loss minimization, voltage deviation minimization, and voltage stability index enhancement.
Artificial neural networks have proved their efficiency in a large number of research domains. In this paper, we apply artificial neural networks to Arabic text for language modeling, text generation, and missing-text prediction. On the one hand, we adapt recurrent neural network architectures to model the Arabic language in order to generate correct Arabic sequences. On the other hand, convolutional neural networks are parameterized, based on specific features of Arabic, to predict missing text in Arabic documents. We demonstrate the power of our adapted models in generating and predicting correct Arabic text compared to the standard models. The models were trained and tested on well-known free Arabic datasets, and the results are promising, with sufficient accuracy.
In present-day communications, speech signals get contaminated by various sorts of noise that degrade speech quality and adversely impact speech recognition performance. To overcome these issues, a novel approach for speech enhancement using modified Wiener filtering is developed: power spectrum computation is applied to the degraded signal to obtain the noise characteristics from the noisy spectrum. In the next phase, an MMSE technique is applied in which the Gaussian distribution of each signal, i.e. the original and noisy signals, is analyzed. The Gaussian distribution provides the spectrum estimate and spectral coefficient parameters that can be used for probabilistic model formulation. Moreover, a-priori-SNR computation is incorporated for updating the coefficients and estimating noise presence, operating similarly to a conventional VAD. However, the conventional VAD scheme relies on a hard threshold that cannot deliver satisfactory performance, so a soft-decision threshold is developed to improve the speech enhancement performance. An extensive simulation study is carried out using MATLAB on the NOIZEUS speech database, and a comparative study is presented in which the proposed approach proves better than the existing technique.
Previous research has highlighted that the neuro-signals of Alzheimer's disease patients are less complex and show lower synchronization than those of healthy, normal subjects. The changes in the EEG signals of Alzheimer's subjects start at an early stage but are not clinically observed and detected. To detect these abnormalities, three synchrony measures and wavelet-based features were computed and studied on an experimental database. After computing these measures, it is observed that phase synchrony and coherence-based features are able to distinguish between Alzheimer's disease patients and healthy subjects. A support vector machine classifier is used for classification, giving 94% accuracy on the experimental database used. Combining these synchrony features with other relevant features could yield a reliable system for diagnosing Alzheimer's disease.
Attenuation correction for PET/MR hybrid imaging systems, and dose planning for MR-based radiation treatment, remain challenging because of missing high-energy photon attenuation data. We present a new method that uses learned nonlinear local descriptors and feature matching to predict pseudo-CT images from T1-weighted and T2-weighted MRI data. The nonlinear local descriptors are obtained by projecting linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The local neighbors of each input descriptor within the input MR images are searched over a constrained spatial range of the MR images in the training dataset; the pseudo-CT patches are then estimated through k-nearest-neighbor regression. The proposed method for pseudo-CT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects.
The cognitive radio paradigm aims to alleviate the scarcity of spectral resources for wireless communication through intelligent sensing and quick resource allocation techniques. In cognitive radio networks (CRNs), secondary users (SUs) actively obtain spectrum access opportunities by supporting primary users (PUs). At present, spectrum access is provided through the cooperative-communication-based link-level cooperation (LLC) principle, in which SUs independently act as relays for PUs to earn spectrum access opportunities. Unfortunately, the LLC approach cannot fully exploit spectrum access opportunities to enhance the throughput of CRNs and fails to motivate PUs to join the spectrum sharing process. To overcome this drawback, the network-level cooperation (NLC) principle was introduced, in which SUs are grouped to collaborate with PUs session by session, instead of cooperating frame by frame for spectrum access opportunities; NLC addresses the challenges faced by the LLC approach. In this paper we survey several models that have been proposed to tackle the problems of LLC and show the relevant aspects of each model, in order to characterize the parameters that should be taken into account to achieve a spectrum access opportunity.
In this paper, the author provides insights and lessons that can be learned from colleagues at American universities about their online education experiences. The literature and previous studies on the gains of online education are explored and summarized. Emerging trends in online education are discussed in detail, and strategies to implement these trends are explained. The author provides several tools and strategies that enable universities to ensure the quality of online education. At the end of the paper, the researcher gives examples of Arab universities that have successfully implemented online education and expanded their impact on society. This research provides a strategy and a model that universities in the Middle East can use as a roadmap to implement online education in their regions.
Welcome to WIPAC Monthly, the magazine brought to you by the LinkedIn group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news, and to celebrate the 13 years since the group was created, we have articles including:
A case study of the use of advanced process control at the wastewater treatment works at Lleida in Spain
A look back at an article on smart wastewater networks, to see how the industry has measured up in the interim on the adoption of digital transformation in the water industry.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
Overview of the fundamental roles in Hydropower generation and the components involved in wider Electrical Engineering.
This paper presents the design and construction of hydroelectric dams, from the hydrologist's survey of the valley before construction through all the disciplines involved, fluid dynamics, structural engineering, generation, and mains frequency regulation, to the transmission of power through the network in the United Kingdom.
Author: Robbie Edward Sayers
Collaborators and co-editors: Charlie Sims and Connor Healey.
(C) 2024 Robbie E. Sayers
Hierarchical Digital Twin of a Naval Power System (Kerry Sado)
A hierarchical digital twin of a naval DC power system has been developed and experimentally verified. Like other state-of-the-art digital twins, this technology creates a digital replica of the physical system that executes in real time or faster and can modify hardware controls. Its advantage, however, stems from distributing the computational effort across a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a single system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, the power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems: the hierarchical structure allows for greater computational efficiency and scalability, while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and the models utilized were well aligned with the physical twin, as indicated by the small maximum deviations between the developed digital twin hierarchy and the hardware.
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS...
Power plants release a large amount of water vapor into the atmosphere through the stack. The flue gas can be a potential source of much-needed cooling water for a power plant: if a plant could recover and reuse a portion of this moisture, it could reduce its total cooling water intake requirement. One of the most practical ways to recover water from flue gas is to use a condensing heat exchanger. The power plant could also recover latent heat due to condensation, as well as sensible heat due to lowering the flue gas exit temperature. Additionally, harmful acids released from the stack can be reduced in a condensing heat exchanger by acid condensation.
Condensation of vapors in flue gas is a complicated phenomenon, since heat and mass transfer of water vapor and various acids occur simultaneously in the presence of noncondensable gases such as nitrogen and oxygen. The design of a condenser depends on knowledge and understanding of the heat and mass transfer processes. A computer program for numerical simulation of water (H2O) and sulfuric acid (H2SO4) condensation in a flue gas condensing heat exchanger was developed using MATLAB. Governing equations based on mass and energy balances for the system were derived to predict variables such as the flue gas exit temperature, cooling water outlet temperature, mole fractions, and condensation rates of water and sulfuric acid vapors. The equations were solved using an iterative solution technique with calculations of heat and mass transfer coefficients and physical properties.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on the binary heap data structure. It is similar to selection sort, in which we repeatedly find the extreme (minimum or maximum) element and place it at its final position, then repeat the process for the remaining elements; the heap simply makes finding that extreme element fast.
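As a sketch of the technique named in the title, the following Python code implements heapify (sift-down), build-heap for an arbitrary array, and the heap sort loop; the max-heap choice and function names are illustrative.

def heapify(a, n, i):
    """Sift the element at index i down so the subtree rooted at i
    satisfies the max-heap property, considering only a[0:n]."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def build_heap(a):
    """Turn an arbitrary list into a max-heap, bottom-up, in O(n)."""
    for i in range(len(a) // 2 - 1, -1, -1):
        heapify(a, len(a), i)

def heap_sort(a):
    """Sort in place by repeatedly moving the current maximum to the end."""
    build_heap(a)
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]   # place the maximum in its final slot
        heapify(a, end, 0)            # restore the heap on the shorter prefix

data = [12, 11, 13, 5, 6, 7]
heap_sort(data)
print(data)                           # [5, 6, 7, 11, 12, 13]

Because Python lists are dynamic arrays, the same routines work unchanged as elements are appended, which is the "dynamic arrays" point in the title.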
CW RADAR, FMCW RADAR, FMCW ALTIMETER, AND THEIR PARAMETERS
This material covers CW radar, FMCW radar, range measurement, the IF amplifier, and the FMCW altimeter. The CW radar operates using continuous wave transmission, while the FMCW radar employs frequency-modulated continuous wave technology. Range measurement is a crucial aspect of radar systems, providing information about the distance to a target. The IF amplifier plays a key role in signal processing, amplifying intermediate frequency signals for further analysis. The FMCW altimeter utilizes frequency-modulated continuous wave technology to accurately measure altitude above a reference point.
Student information management system project report II (Kamal Acharya)
Our project is about student management and mainly covers the various actions related to student details. It makes adding, editing, and deleting student details easy, and provides a less time-consuming process for viewing, adding, editing, and deleting students' marks.
Hybrid optimization of pumped hydro system and solar (Engr. Abdul-Azeez)
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively; this initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative-analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing the system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV and pumped hydro energy supply systems for sustainable energy usage: examined across the diverse climatic conditions of a developing country, such optimization not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacity, which is particularly beneficial for less economically prosperous regions. The study also provides valuable insights for advancing energy research in economically viable areas. Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
service availability by reducing the downtime to zero. The virtualized environment consists of different layers, such as the physical hardware, the hypervisor (VMM), the virtual machines (VM), the operating system, and the applications running on top of the operating system. The Virtual Machine Monitor (VMM), also called the hypervisor, is liable to suffer failures or hangs due to software aging [1]. The aging issues in these layers are to be monitored, and the necessary rejuvenation action triggered, in order to enhance service availability and reduce downtime.
Windows Active Directory is a necessary service that is used to authenticate the users in a domain network. Software aging of this service may lead to hazardous consequences such as service downtime, which in turn has business impact. Hence, there is a need to analyze the software aging patterns in the Active Directory service for proper estimation of the rejuvenation schedule. In this work, the analysis is done on a live work environment and the results are discussed. In the remaining sections, Section 2 presents related work and Section 3 details the software aging study, followed by Section 4, which discusses the results and analysis. The conclusion is covered in Section 5.
2. Related Work
Researchers have employed different techniques for aging detection and rejuvenation. Previous studies have used measurement-based and model-based rejuvenation approaches. Measurement-based approaches directly monitor system variables that are aging indicators and predict software aging patterns by statistically analyzing the collected runtime data. Model-based studies can be distinguished by the type of stochastic process used to model the phenomenon, such as Markov chains and Petri nets.
Alonso et al. [2] evaluated machine learning algorithms such as decision trees, K-nearest neighbor, and random forest using the R statistical language for aging prediction. The researchers used different sets of values when the ML algorithm had parameters, to create different configurations of the same algorithm. The results indicated that random forest performs better than the rest of the models. Toshiaki Hayashi et al. [3] estimated performance degradation by passively measuring the traffic exchanged by virtual machines. The authors justified the selection of traffic characteristics as a performance information source by citing several advantages. This data, along with the recorded traffic metrics, was tested with the C4.5 machine learning classifier, which constructed a decision tree to identify performance. This is a non-intrusive method of metrics collection, as the traffic measurement is done on a separate machine that is not part of the virtual environment, which facilitates metrics extraction even under extreme performance degradation.
Jing Liu et al. [4] proposed an adaptive failure detection method. The parameters chosen for aging detection are the CPU and memory usage, and the delay in packet transmission between the service components and the aging failure detection module. The collected metrics are encapsulated into a packet and sent to the aging detector module, so the procedure exploits two things: the packet arrival time and the information the packet carries. Some failure probability is expected if the message does not arrive on time, while the CPU usage and the free memory available are used to detect the aging severity. The aging severity is divided into four levels, L1 to L4, and the Aging Degree Evaluator module inserts each event into a centralized failure event queue. This queue is ordered by aging degree number, and the top events are rejuvenated first.
The research work of Yongquan Yan [5] indicates the significance of choosing a proper data set; the researcher showed that choosing the proper data set is more important than the method used to analyze the collected metrics. The work compares the resource utilization of a web server that is subjected not to an artificial load but to a true load exhibiting aging patterns, using linear and nonlinear methods. Lei Cui et al. [6] concentrated their work on finding the impact of software aging defects on virtual machines and physical machines. The aging rate in both forms was calculated and compared, and the outcome indicates that aging effects are greater in virtual machines than in physical machines, caused by aging effects in the hypervisor code or by depletion due to additional calls in the VMM layer. The outcome of this study of the existing techniques is that there is scope for research in finding non-intrusive, platform-independent, and more accurate aging prediction techniques.
3. The Proposed Model for Software Aging Forecasting
3.1. Data Collection
In order to find whether aging patterns exist in long-running applications, the study is conducted on the long-running Windows Active Directory service. For this study, an institute with an approximate user strength of six thousand is chosen. It is necessary to monitor multiple metrics that reflect broader utilization of the resources. Two metrics that are aging indicators are considered for this study: CPU usage and memory availability. The metrics are collected over a period of six months and analyzed. The metrics and the justification for choosing these aging indicators are given in Table 1.
Table 1. Aging Indicators
Aging Indicator       Description
CPU usage             System-wide aging indicator that reports the CPU load in percentage
Memory availability   System-wide aging indicator that reports the available memory in percentage
To capture the data, a network monitoring tool called PRTG (Paessler Router Traffic Grapher) is used. Figures 1 and 2 depict the workload graphs during the data collection.
Figure 1. CPU workload graph during the metrics collection
Figure 2. Memory availability graph during the metrics collection
It is necessary to collect the performance metrics without affecting the performance of the server at run time. The collected metrics indicate the resource consumption of performance-related parameters, and a non-intrusive approach has been used: the measurement program does not affect the hardware or software functionality or load. The PRTG tool sensors use the programming interfaces of each device wherever possible, which means the administrator does not have to install additional client applications or agents on each device, simplifying and accelerating the setup and keeping the devices free of additional performance overhead. To test the overhead of running PRTG, resource consumption was observed on a machine with zero workload; the graph indicates the negligible overhead of the metrics collector on the performance of the application.
Figure 3. Performance overhead of metrics collector
3.2. Statistical Analysis of Collected Data
Statistical analysis of software aging helps in collecting and scrutinizing the collected metrics and finding software aging indicators. The presented analysis considers a period of six months, with a monitoring interval of two hours. Figure 4 shows the CPU usage percentage of the virtual machine that runs Active Directory, and Figure 5 depicts the memory consumption. The plots also contain an approximated linear function of the CPU usage behavior.
Software aging data analysis generally uses the Mann-Kendall test to evaluate the trend in the data [7-9]. The Mann-Kendall test [10] checks the null hypothesis H0, that there is no trend in the data over time, against the alternative hypothesis H1, that there is an upward or downward monotonic trend in the data. As software aging is a cumulative process, the Mann-Kendall test can be used to reveal trends of software internal state degradation. Among the Mann-Kendall test results is the Z-value, which is used to accept or reject the null hypothesis: a Z-value close to zero suggests no trend in the data, while a high absolute value indicates the existence of a trend. Figure 4 and Figure 5 indicate the CPU usage trend and the memory usage trend respectively.
Figure 4. CPU Usage Trend
Figure 5. Memory Consumption Trend
Table 2 presents the results of the statistical analysis of the data. The results show that the Mann-Kendall Z-value is higher than zero; therefore, it is possible to reject the null hypothesis (no trend in the data). The positive value indicates an upward monotonic trend in the data. To calculate the slope of the monotonic trend, we used the Sen method [11]. Table 2 also presents the estimated slope and the 95% confidence interval for this slope.
Table 2. Statistical Analysis
Parameter                 Memory Consumption                  CPU Usage
Mann-Kendall Z-value      70.4                                46.2
Estimated slope           550.9424 KB/2h                      0.0009 %CPU/2h
95% confidence interval   (550.8521 KB/2h, 551.0322 KB/2h)    (0.0009 %CPU/2h, 0.001 %CPU/2h)
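To make the trend test and the slope estimate concrete, here is a minimal Python sketch of the Mann-Kendall Z statistic and Sen's slope for a series of sampled metrics; it ignores tied values (which the full test corrects for) and is an illustration under those assumptions, not the tooling used in the study.

import math
from statistics import median

def mann_kendall_z(x):
    """Mann-Kendall Z statistic: |Z| well above zero indicates a monotonic trend."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S assuming no tied values (ties need a correction term).
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        return (s - 1) / math.sqrt(var_s)
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

def sen_slope(x):
    """Sen's estimator: the median slope over all sample pairs, per sampling interval."""
    return median([(x[j] - x[i]) / (j - i)
                   for i in range(len(x) - 1) for j in range(i + 1, len(x))])

# Toy upward-drifting series sampled every two hours, like the study's metrics.
cpu = [12.0, 12.1, 12.3, 12.2, 12.5, 12.7, 12.6, 12.9, 13.1, 13.2]
print(mann_kendall_z(cpu), sen_slope(cpu))

A positive Z together with a positive Sen slope reproduces the kind of upward monotonic trend reported in Table 2.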
3.3. Prediction using Time Series Forecasting
This section discusses the proposed software aging forecasting model. To prevent a system crash, it is necessary to predict the resource exhaustion time. The CPU usage metrics are tabulated in Table 3. Data captured from different time slots of the day are entered in the table, because the number of users (the load) varies across time slots. Average values of the CPU consumption percentage are tabulated, and based on these a moving average over four slots is calculated. A moving average (MA) is a calculation that analyzes data points by creating a series of averages of different subsets of the full data set. In this scenario, the MA is calculated on the average value of the CPU usage metric.
Table 3. Predictions using time series
Time  Day    Slot  CPU Usage (%)  Moving Avg.  Center Moving Avg.  St·It  St    Deseasonalized  Tt     Prediction
1     Day 1  1     12             -            -                   -      0.30  40.39           45.25  13.44
2     Day 1  2     70             -            -                   -      1.44  48.58           46.38  66.84
3     Day 1  3     63             48.25        48.625              1.30   1.22  51.76           47.52  57.83
4     Day 1  4     48             49           50                  0.96   1.03  46.77           48.65  49.93
5     Day 2  1     15             51           51.25               0.29   0.30  50.49           49.78  14.79
6     Day 2  2     78             51.5         52                  1.50   1.44  54.13           50.92  73.37
7     Day 2  3     65             52.5         52.625              1.24   1.22  53.40           52.05  63.35
8     Day 2  4     52             52.75        53                  0.98   1.03  50.67           53.18  54.58
9     Day 3  1     16             53.25        53.5                0.30   0.30  53.85           54.32  16.14
10    Day 3  2     80             53.75        54.75               1.46   1.44  55.52           55.45  79.90
11    Day 3  3     67             55.75        55.875              1.20   1.22  55.05           56.58  68.87
12    Day 3  4     60             56           56                  1.07   1.03  58.46           57.72  59.23
13    Day 4  1     17             56           56.75               0.30   0.30  57.22           58.85  17.48
14    Day 4  2     80             57.5         58.75               1.36   1.44  55.52           59.98  86.43
15    Day 4  3     73             60           -                   -      1.22  59.98           61.12  74.39
16    Day 4  4     70             -            -                   -      1.03  68.21           62.25  63.89
17    Day 5  1     -              -            -                   -      0.30  -               63.38  18.83
18    Day 5  2     -              -            -                   -      1.44  -               64.52  92.97
19    Day 5  3     -              -            -                   -      1.22  -               65.65  79.91
20    Day 5  4     -              -            -                   -      1.03  -               66.78  68.54
The moving average is taken over an equal number of data points on either side of a central value. This ensures that variations in the mean are aligned with the variations in the data rather than being shifted in time. Once the values are smoothed using the moving average method, they can be smoothed further by the centered moving average method, a special procedure applied when the number of seasons is even. The relevant graph is shown in Figure 6.
Figure 6. Graph indicating CPU usage and center moving average
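As an illustration of the smoothing just described, the following Python sketch computes the four-slot moving average and the centering step on the day 1 and day 2 slot averages from Table 3 (illustrative code with assumed names, not the authors' calculation sheet).

def moving_average(x, window=4):
    """Averages of each consecutive run of `window` samples."""
    return [sum(x[i:i + window]) / window for i in range(len(x) - window + 1)]

def centered_moving_average(ma):
    """Average adjacent moving averages to re-center them when the
    season length (here, four slots per day) is even."""
    return [(ma[i] + ma[i + 1]) / 2 for i in range(len(ma) - 1)]

cpu = [12, 70, 63, 48, 15, 78, 65, 52]    # day 1 and day 2 slot averages
ma = moving_average(cpu)                  # [48.25, 49.0, 51.0, 51.5, 52.5]
print(centered_moving_average(ma))        # [48.625, 50.0, 51.25, 52.0]

The printed values match the Center Moving Average column of Table 3 for rows 3 to 6.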
The seasonal component St and the irregular component It are obtained by dividing the actual data by the centered moving average; this yields the combined seasonality and irregularity of the data. The value 1.30 means that in the third time slot of day 1 the seasonality-and-irregularity component is 30% above the baseline data, while the value 0.29 in the first time slot of day 2 means that it is 71% below the baseline data. The next step is to remove the irregularity component It, which is done by averaging the corresponding time slots across the days; this yields the seasonal components 0.30, 1.44, 1.22 and 1.03 for our data, so, for example, the seasonal component of the third time slot is 22% above the baseline data. The next step is to deseasonalize the data, which is achieved by dividing the time series data by the seasonal component. Having removed the irregularities and deseasonalized the data, the remaining step is to find the trend component: simple linear regression is performed using the deseasonalized data as the Y variable and time as the X variable, giving a Y-intercept of 44.11 and a slope of 1.33. The trend component at time t can then be obtained from the formula

Tt = intercept + slope * t (1)

The final predicted values are obtained by multiplying the seasonal component by the trend component. The summary data is presented in Table 4.
Table 4. Summary Data
Regression Statistics
Multiple R 0.876714
R Square 0.768627
Adjusted R Square 0.7521
Standard Error 3.064644
Observations 16
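Pulling the steps together, the following Python sketch carries out the full multiplicative decomposition forecast under the assumptions described above: four slots per season, seasonal indices obtained by averaging the St·It ratios per slot, a least-squares trend line fitted to the deseasonalized series, and predictions formed as St times Tt. The function name is illustrative, and small numerical differences from Table 3 are expected because the paper rounds and averages its intermediate values.

def decompose_and_forecast(y, season=4, horizon=4):
    """Multiplicative seasonal decomposition with a linear trend.
    y holds one observed value per time slot; returns `horizon` forecasts."""
    n = len(y)
    # 1. Moving average and centered moving average as the baseline.
    ma = [sum(y[i:i + season]) / season for i in range(n - season + 1)]
    cma = [(ma[i] + ma[i + 1]) / 2 for i in range(len(ma) - 1)]
    # 2. St*It ratios; the seasonal index St is the mean ratio per slot.
    ratios = {s: [] for s in range(season)}
    for k, c in enumerate(cma):
        t = k + season // 2              # index of the centered observation
        ratios[t % season].append(y[t] / c)
    st = [sum(ratios[s]) / len(ratios[s]) for s in range(season)]
    # 3. Deseasonalize, then fit a least-squares line T(t) = a + b*t.
    d = [y[t] / st[t % season] for t in range(n)]
    ts = list(range(1, n + 1))
    tbar, dbar = sum(ts) / n, sum(d) / n
    b = (sum((t - tbar) * (v - dbar) for t, v in zip(ts, d))
         / sum((t - tbar) ** 2 for t in ts))
    a = dbar - b * tbar
    # 4. Forecast: extrapolated trend re-scaled by the seasonal index.
    return [st[(n + h) % season] * (a + b * (n + h + 1)) for h in range(horizon)]

cpu = [12, 70, 63, 48, 15, 78, 65, 52, 16, 80, 67, 60, 17, 80, 73, 70]
print(decompose_and_forecast(cpu))       # day 5 slot forecasts

Run on the sixteen observed CPU averages, the sketch reproduces the shape of the day 5 predictions in Table 3: a low first slot, a peak in the second slot, and declining values afterwards.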
4. Results and Discussion
In order to evaluate the accuracy of the proposed technique, forecasts are produced for the already collected metrics. The results indicate that the predicted values are accurate. Figure 7 depicts the comparison of the actual values with the predicted values.
Figure 7. Comparison of actual values with predicted values
The predictions made here are for one day; the same technique can be used to forecast the values of the aging indicators for an upcoming week or month. The aging indicators considered here are the CPU usage percentage and the memory availability percentage. The other significant parameter to be used is the threshold value of an aging indicator, i.e., the maximum load tolerable by the system. The threshold value can be obtained by studying the history of the service: when the system crashed, the reason for the failure, and what the parameter values were at the time. Once the threshold values are identified, weights are assigned to these parameters depending on whether the system runs a processor-intensive or a memory-intensive application. By varying the weights and the threshold, we can obtain different decision-making models. The inputs to the model are x1 and x2, which indicate CPU usage and memory availability, the aging indicators. The weights w1 and w2 are assigned depending on the type of application. Depending on whether the weighted sum ∑j wj xj is less than or greater than some predetermined value, rejuvenation can be triggered.
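A minimal Python sketch of that decision rule follows; the weights, threshold, and function name are illustrative assumptions rather than values from the paper, and memory consumption is used in place of availability so that both indicators grow as aging progresses.

def should_rejuvenate(cpu_usage, mem_consumed, w1=0.6, w2=0.4, threshold=0.8):
    """Trigger rejuvenation when the weighted sum of the aging indicators
    exceeds a predetermined threshold; all values are normalized to [0, 1]."""
    x1, x2 = cpu_usage, mem_consumed
    return w1 * x1 + w2 * x2 > threshold

# Forecast CPU at 93% with 85% of memory consumed (15% available):
print(should_rejuvenate(0.93, 0.85))     # True -> schedule rejuvenation

For a processor-intensive deployment w1 would be raised relative to w2, and vice versa for a memory-intensive one, exactly as the weighting scheme above suggests.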
5. Conclusion
The proposed forecasting technique is a potential candidate for software aging prediction. As more and more educational institutes use private cloud services as part of their IT infrastructure, ensuring continuous service delivery is important. This work helps in devising better rejuvenation schedules, thus avoiding downtime.
References
[1] Melo Matheus, Paulo Maciel, Jean Araujo, Rubens Matos, Carlos Araujo. Availability study on cloud computing environments: live migration as a rejuvenation mechanism. Proc. 43rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks. 2013: 1-6.
[2] Alonso J, Belanche L, Avresky R. Predicting software anomalies using machine learning techniques. Proc. 10th IEEE International Symposium on Network Computing and Applications (NCA). 2011: 163-170.
[3] Hayashi T, Ohta S. Performance degradation of virtual machines via passive measurement and machine learning. International Journal of Adaptive, Resilient and Autonomic Systems. 2014: 40-56.
[4] Jing Liu, Jiantao Zhou, Rajkumar Buyya. Software rejuvenation based fault tolerance scheme for cloud applications. Proc. IEEE 8th International Conference on Cloud Computing. 2015: 1115-1118.
[5] Yongquan Yan. A practice guide of predicting resource consumption in a web server. Review of Computer Engineering Studies. 2015: 1-8.
[6] Lei Cui, Bo Li, Jianxin Li, James Hardy, Lu Liu. Software aging in virtualized environments: detection and prediction. Proc. International Conference on Parallel and Distributed Systems, IEEE. 2012: 718-719.
[7] Grottke M, Li L, Vaidyanathan K, Trivedi KS. Analysis of software aging in a web server. IEEE Transactions on Reliability. 2006; 55(3): 411-420.
[8] Garg S, Van Moorsel A, Vaidyanathan K, Trivedi KS. A methodology for detection and estimation of software aging. Proc. Ninth International Symposium on Software Reliability Engineering, IEEE. 1998: 283-292.
[9] Machida F, Andrzejak A, Matias R, Vicente E. On the effectiveness of Mann-Kendall test for detection of software aging. Proc. IEEE International Symposium on Software Reliability Engineering Workshops. 2013: 269-274.
[10] Mann HB. Nonparametric tests against trend. Econometrica: Journal of the Econometric Society. 1945: 245-259.
[11] Sen PK. Estimates of the regression coefficient based on Kendall's tau. Journal of the American Statistical Association. 1968; 63(324): 1379-1389.
[12] Xiaozhi Du, Huimin Lu, Gang Liu. Software aging prediction based on extreme learning machine. TELKOMNIKA. 2013; 11(11): 6547-6555.
[13] Ferdy Nirwansyah, Suharjito. Hybrid disk drive configuration on database server virtualization. Indonesian Journal of Electrical Engineering and Computer Science. 2016; 2(3): 720-728.