A new approach for the control and prediction of verification activities for large safety-relevant software
systems will be presented in this paper. The model is applied at a macroscopic system level and is based on
so-called Moran processes, which originate from mathematical biology and allow for the description
of phenomena such as, for instance, genetic drift. Besides the theoretical foundations of this novel approach, its
application to a real-world example from the medical engineering domain will be discussed.
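The abstract does not reproduce the modified process itself; as a rough illustration of the underlying mechanism, a standard (unmodified) Moran birth-death process can be simulated in a few lines. The population size `n`, mutant fitness `r` and trial count below are illustrative choices, not values from the paper.

```python
import random

def moran_step(pop, fitness, rng):
    """One Moran birth-death step: choose a reproducer proportionally
    to fitness, choose a uniform individual to die, copy the type."""
    weights = [fitness[t] for t in pop]
    born = rng.choices(range(len(pop)), weights=weights, k=1)[0]
    die = rng.randrange(len(pop))
    pop[die] = pop[born]

def fixation_fraction(n=20, trials=500, r=1.1, seed=0):
    """Fraction of runs in which one mutant of fitness r takes over
    a resident population of size n (drift plus weak selection)."""
    rng = random.Random(seed)
    fitness = {'M': r, 'R': 1.0}
    fixed = 0
    for _ in range(trials):
        pop = ['M'] + ['R'] * (n - 1)
        while 0 < pop.count('M') < n:   # run to an absorbing state
            moran_step(pop, fitness, rng)
        fixed += pop.count('M') == n
    return fixed / trials
```

For n = 20 and r = 1.1 the estimate should land near the classical fixation probability (1 - 1/r)/(1 - 1/r^n), roughly 0.107.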
BIO-INSPIRED MODELLING OF SOFTWARE VERIFICATION BY MODIFIED MORAN PROCESSES – IJCSEA Journal
A Novel Approach to Derive the Average-Case Behavior of Distributed Embedded ... – ijccmsjournal
Monte-Carlo simulation is widely used for distributed embedded systems in our present era. In this
research work, we place an emphasis on the reliability assessment of a distributed embedded system
through Monte-Carlo simulation. We have done this assessment on random data representing input
voltages ranging from 0 V to 12 V; a number of trials have been executed on these data to
check the average-case behavior of a distributed real-time embedded system. From the experimental results, a saturation point is reached in the time behavior, which shows the average-case behavior of the concerned distributed embedded system.
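The described experiment can be approximated with a short sketch: sample uniformly random voltages in the 0–12 V range and track the running estimate until it saturates. The tolerance band `ok_band` is a hypothetical stand-in for whatever acceptance criterion the work actually uses.

```python
import random

def mc_reliability(trials=10000, v_min=0.0, v_max=12.0,
                   ok_band=(4.5, 5.5), seed=1):
    """Monte-Carlo reliability estimate: draw uniformly random input
    voltages in [v_min, v_max] and track the running fraction that
    falls inside the tolerated band; the running estimate saturates
    as the number of trials grows."""
    rng = random.Random(seed)
    hits, running = 0, []
    for n in range(1, trials + 1):
        v = rng.uniform(v_min, v_max)
        hits += ok_band[0] <= v <= ok_band[1]
        running.append(hits / n)
    return running[-1], running
```

With a 1 V band inside a 12 V range, the running estimate should settle near 1/12.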
Assessing Software Reliability Using SPC – An Order Statistics Approach – IJCSEA Journal
There are many software reliability models based on the times of occurrence of errors during the debugging of software. It is shown that it is possible to do asymptotic likelihood inference for software reliability models based on order statistics or Non-Homogeneous Poisson Processes (NHPP), with asymptotic confidence levels for interval estimates of parameters. In particular, interval estimates from these models are obtained for the conditional failure rate of the software, given the data from the debugging process. The data can be grouped or ungrouped. For someone deciding when to market software, the conditional failure rate is an important parameter. Order statistics are used in a wide variety of practical situations: their use in characterization problems, detection of outliers, linear estimation, study of system reliability, life testing, survival analysis, data compression and many other fields can be seen from the many books on the subject. Statistical Process Control (SPC) can monitor the forecasting of software failure and thereby contribute significantly to the improvement of software reliability. Control charts are widely used for software process control in the software industry. In this paper we propose a control mechanism based on order statistics of the cumulative quantity between observations of time-domain
failure data, using the mean value function of the Half-Logistic Distribution (HLD) based on NHPP.
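A commonly cited mean value function for an HLD-based NHPP is m(t) = a(1 - e^(-bt))/(1 + e^(-bt)). Assuming that form (the paper may parameterize it differently), a control mechanism with the usual 0.00135/0.99865 probability limits can be sketched as:

```python
import math

def m_hld(t, a, b):
    """Mean value function of an NHPP built on the half-logistic
    distribution: m(t) = a * (1 - exp(-b*t)) / (1 + exp(-b*t)),
    where a is the expected total fault count and b a rate."""
    e = math.exp(-b * t)
    return a * (1 - e) / (1 + e)

def control_limits(a):
    """Probability limits commonly used on SPC failure charts,
    equivalent to 3-sigma coverage: 0.00135*a, 0.5*a, 0.99865*a."""
    return 0.00135 * a, 0.5 * a, 0.99865 * a

def out_of_control(failure_times, a, b):
    """Indices of failures whose cumulative mean value falls outside
    the control limits, signalling an assignable cause."""
    lcl, _, ucl = control_limits(a)
    return [i for i, t in enumerate(failure_times)
            if not (lcl <= m_hld(t, a, b) <= ucl)]
```

The parameters a and b would normally be estimated from the failure data (e.g. by maximum likelihood); fixed values are used here only for illustration.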
Project risk management is an integral part of business survival. This research paper focuses on determining project risk factors using a genetic algorithm and fuzzy logic, motivated by the shortcomings of conventional approaches. The genetic algorithm helps optimise the parameter data items while fuzzy logic handles imprecision. The Unified Modelling Language was utilized for modelling the software system, depicting clearly the interaction between the various components and the dynamic aspects of the system. This paper demonstrates the practical application of metric-based soft computing techniques in the health sector in determining patients' satisfaction.
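As a sketch of the two ingredients named above (the paper's actual models are not reproduced here), the following combines triangular fuzzy memberships for a 0–10 risk score with a tiny real-coded genetic algorithm; all membership breakpoints and GA settings are invented for illustration.

```python
import random

def tri(x, a, b, c):
    """Triangular fuzzy membership over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_memberships(score):
    """Degree to which a 0-10 risk score is low / medium / high
    (breakpoints are illustrative, not from the paper)."""
    return {'low': tri(score, -1, 0, 5),
            'medium': tri(score, 2, 5, 8),
            'high': tri(score, 5, 10, 11)}

def ga_maximize(fitness, bounds, pop=30, gens=40, seed=2):
    """Tiny real-coded GA: binary tournament selection, midpoint
    crossover, gaussian mutation clipped to the bounds."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        def tourney():
            x, y = rng.sample(P, 2)
            return x if fitness(x) >= fitness(y) else y
        nxt = []
        for _ in range(pop):
            p1, p2 = tourney(), tourney()
            child = [min(max((u + v) / 2 + rng.gauss(0, 0.1 * (hi - lo)),
                             lo), hi)
                     for u, v, (lo, hi) in zip(p1, p2, bounds)]
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)
```

In a risk model the GA would tune the membership breakpoints or factor weights against historical project data rather than the toy objective used in the test.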
STATE-OF-THE-ART IN EMPIRICAL VALIDATION OF SOFTWARE METRICS FOR FAULT PRONEN... – IJCSES Journal
With the sharp rise in software dependability requirements and failure costs, high quality has been in great demand. However, guaranteeing high quality in software systems, which have grown in size and complexity under the constraints imposed on their development, has become an increasingly difficult, time- and resource-consuming activity. Consequently, it becomes imperative to deliver software that has no serious faults. In
this context, object-oriented (OO) products, being the de facto standard of software development, could with their unique features have faults that are hard to find, or impacts of changes that are hard to pinpoint. The earlier faults are identified, found and fixed, the lower the costs and the higher the quality. To assess product quality, software metrics are used, and many OO metrics have been proposed and developed. Furthermore,
many empirical studies have validated the relationship between metrics and class fault proneness (FP). The challenge is which metrics are related to class FP and what activities are performed. Therefore, this study brings together the state-of-the-art in FP prediction utilizing the CK and size metrics. We conducted a systematic literature review over relevant published empirical validation articles. The results obtained are
analysed and presented. It indicates that 29 relevant empirical studies exist, and measures such as complexity, coupling and size were found to be strongly related to FP.
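A typical way such validation studies relate CK metrics to class fault proneness is logistic regression; a minimal gradient-descent version over hypothetical (WMC, CBO) measurements might look like:

```python
import math

def sigmoid(z):
    z = max(-60.0, min(60.0, z))   # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.05, epochs=2000):
    """Plain stochastic-gradient logistic regression relating metric
    vectors to a fault/no-fault label; w[0] is the bias term."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            g = p - yi                  # gradient of the log-loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

# Hypothetical (WMC, CBO) measurements with fault-proneness labels.
X = [(2, 1), (3, 2), (4, 2), (10, 8), (12, 9), (11, 7)]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
```

In the reviewed studies the fitted coefficients (and their significance) are what indicates which metrics relate to FP; here the data are synthetic and purely illustrative.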
IoT Device Intelligence & Real Time Anomaly Detection – Braja Krishna Das
-- Real Time Anomaly Detection
-- IoT Device Intelligence
-- Univariate and Multivariate Anomaly Detection
-- Unsupervised Learning Classification from Anomaly Detection
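A baseline for the univariate case listed above is a simple z-score detector; the multivariate and unsupervised-classification variants would extend this idea (e.g. with Mahalanobis distance or clustering), which this sketch does not attempt.

```python
from statistics import mean, stdev

def zscore_anomalies(readings, k=3.0):
    """Flag indices more than k standard deviations from the mean:
    a baseline univariate detector for an IoT sensor stream."""
    mu, sd = mean(readings), stdev(readings)
    return [i for i, x in enumerate(readings) if abs(x - mu) > k * sd]
```

For streaming data a rolling mean/deviation would replace the global statistics used here.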
Formal method techniques provide a suitable platform for software development in software systems.
Formal methods and formal verification are necessary to prove correctness and improve the performance of
software systems at various levels of design and implementation. Security is an important
issue in computer systems. Since antivirus applications play a very important role in computer system
security, verifying these applications is essential and necessary. In this paper, we present four new
approaches for antivirus system behavior, and a behavioral model of the protection services in the antivirus
system is proposed. We divide the behavioral model into preventive behavior and control behavior, and
then we formalize these behaviors. Finally, using some definitions, we explain the way these behaviors are
mapped onto each other by our new approaches.
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE... – IJNSA Journal
This study examines the traditional approach to software development within the United Kingdom Government and the accreditation process. Initially we look at the Waterfall methodology that has been used for several years. We discuss the pros and cons of Waterfall before moving on to the Agile Scrum methodology. Agile has been adopted by the majority of Government digital departments, including the Government Digital Service. Agile, despite its ability to achieve high rates of productivity organized in short, flexible iterations, has faced security professionals' disbelief when working within the U.K. Government. One of the major issues is that development proceeds in Agile but the accreditation process is conducted using Waterfall, resulting in delays to go-live dates. Taking a brief look into the accreditation process used within Government for I.T. systems and applications, we focus on giving the accreditor the assurance they need when new applications and systems are developed. A framework has been produced by utilising the Open Web Application Security Project's (OWASP) Application Security Verification Standard (ASVS). This framework will allow security and Agile to work side by side and produce secure code.
Proposed Algorithm for Surveillance Applications – Editor IJCATR
Technological systems are vulnerable to faults. In many fault situations, system operation has to be stopped to avoid
damage to machinery and humans. As a consequence, the detection and handling of faults play an increasing role in modern
technology, where many highly automated components interact in a complex way, such that a fault in a single component may cause
the malfunction of the whole system. This work introduces the main ideas of fault diagnosis and fault-tolerant control in light
of the various research works done in this area. It presents the Arduino technology on both the hardware and software sides. The purpose of this
paper is to propose a diagnostic algorithm based on this technology. A case study is proposed for this setting. Moreover, we explain
and discuss the results of our algorithm.
Formal Verification of Distributed Checkpointing Using Event-B – ijcsit
The development of complex systems makes correct software development a challenging task. Due to a faulty
specification, software may contain errors. Traditional testing methods are not sufficient to verify the
correctness of such complex systems. In order to capture correct system requirements and reason
rigorously about the problems, formal methods are required. Formal methods are mathematical techniques
that provide precise specifications of problems together with their solutions and proofs of correctness. In this paper,
we have formally verified the checkpointing process in a distributed database system using Event-B.
Event-B is an event-driven formal method which is used to develop formal models of distributed database
systems. In a distributed database system, the database is stored at different sites that are connected
through the network. A checkpoint is a recovery point which contains the state information about
the site. In order to recover a distributed transaction, a global checkpoint number (GCPN) is
required. The global checkpoint number decides which transactions will be included for recovery purposes. All
transactions whose timestamps are less than the global checkpoint number are marked as before-checkpoint
transactions (BCPT) and are considered for recovery purposes. Transactions whose timestamps are
greater than the GCPN are marked as after-checkpoint transactions (ACPT) and become part of the next global
checkpoint.
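The BCPT/ACPT split described above is straightforward to state operationally. The sketch below follows the abstract's wording (timestamp strictly less than the GCPN means before-checkpoint); the boundary case of equality is not specified in the abstract, so placing it on the ACPT side here is an assumption.

```python
def classify_transactions(transactions, gcpn):
    """Split transactions around the global checkpoint number (GCPN):
    timestamp < GCPN  -> before-checkpoint (BCPT), recovered now;
    timestamp >= GCPN -> after-checkpoint (ACPT), deferred to the
    next global checkpoint (the `>=` on equality is an assumption)."""
    bcpt = [t for t in transactions if t['ts'] < gcpn]
    acpt = [t for t in transactions if t['ts'] >= gcpn]
    return bcpt, acpt
```

The Event-B model would express the same partition as invariants over the machine state rather than as executable code.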
Practical Guidelines to Improve Defect Prediction Model – A Review – inventionjournals
Defect prediction models are used to pinpoint risky software modules and understand the past pitfalls that lead to defective modules. The predictions and insights derived from defect prediction models may not be accurate and reliable if researchers do not consider the impact of the experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modeling. Therefore, a lack of awareness and practical guidelines from previous research can lead to invalid predictions and unreliable insights. Through case studies of systems that span both proprietary and open-source domains, we find that (1) noise in defect datasets, (2) parameter settings of classification techniques, and (3) model validation techniques have a large impact on the predictions and insights of defect prediction models, suggesting that researchers should carefully select experimental components in order to produce more accurate and reliable defect prediction models.
Measurement and Evaluation of Reliability, Availability and Maintainability o... – IOSR Journals
The growing complexity of equipment and systems often leads to failures, and as a consequence the
aspects of reliability, maintainability and availability have come to the forefront. The failure of machinery and
equipment causes disruption in production, resulting from a loss of availability of the system, and also increases
the cost of maintenance. The present study deals with the determination of the reliability and availability aspects of
one of the significant constituents of a Railway Diesel Locomotive Engine. In order to assess the availability
performance of these components, a broad set of studies has been carried out to gather accurate information at
the level of detail considered suitable to meet the availability analysis target. The reliability analysis is
performed using the Weibull distribution, and the various data plots as well as failure rate information help in
achieving results that may be utilized in the near future by Railway Locomotive Engines for reducing
unexpected breakdowns and enhancing the reliability and availability of the engine. In this work, ABC
analysis has been used for the maintenance of the spare parts inventory. Here, the power pack assemblies and engine
system are used to focus on the reliability, maintainability and availability aspects.
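The Weibull machinery referred to above reduces to a few closed-form expressions; the parameter values in the usage below are illustrative, not the study's fitted values.

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)**beta): probability of surviving to time t,
    with shape beta and characteristic life eta."""
    return math.exp(-((t / eta) ** beta))

def weibull_failure_rate(t, beta, eta):
    """Hazard h(t) = (beta/eta)*(t/eta)**(beta-1); increasing for
    beta > 1 (wear-out), decreasing for beta < 1 (infant mortality)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def mttf(beta, eta):
    """Mean time to failure: eta * Gamma(1 + 1/beta)."""
    return eta * math.gamma(1 + 1 / beta)
```

In practice beta and eta would be estimated from the failure records (e.g. via a Weibull probability plot or maximum likelihood), which is the step the study performs on the locomotive data.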
A model for run time software architecture adaptation – ijseajournal
Since the global demand for software systems is increasing and environments and systems are constantly
changing, the adaptability of software systems is of significant importance. Because the architecture of a
software system is a high-level view of the system and makes modifiability possible at an overall level,
architectural adaptation can be considered an effective approach to adapting software systems by
changing the architecture configuration. In this study, the architecture configuration is modified through the xADL
language, a software architecture description language with high flexibility. Software
architecture reconfiguration is done based on the existing rules of a rule-based system, which are written with
respect to three strategies: load balancing, fixed bandwidth and fixed latency. The proposed model
is simulated on samples of a client-server system, a video conferencing system and a students'
grading system. The proposed model can be used with all types of architecture, including Client-Server
Architecture, Service-Oriented Architecture and so on.
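The rule-based reconfiguration described above can be reduced to a first-match rule table; the metric names, thresholds and configuration labels below are invented examples, not the study's actual xADL rules.

```python
def choose_configuration(metrics, rules):
    """Apply the first matching rule to pick an architecture
    configuration; rules mirror strategies such as load balancing
    or fixed-latency operation. Each rule is a (predicate, config)
    pair evaluated against current runtime metrics."""
    for condition, config in rules:
        if condition(metrics):
            return config
    return 'default'
```

In the study the selected configuration would then be applied by rewriting the xADL architecture description rather than returning a label.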
Software testing effort estimation with Cobb-Douglas function - a practical ap... – eSAT Journals
Abstract: Effort estimation is one of the critical challenges in the Software Testing Life Cycle (STLC). It is the basis for the project's effort estimation, planning, scheduling and budget planning. This paper illustrates a model with the objective of depicting the accuracy and bias variation of an organization's estimates of software testing effort through the Cobb-Douglas function (CDF). The data variables selected for building the model were believed to be vital and to have a significant impact on the accuracy of estimates. Data were gathered for the completed projects in the organization, covering about 13 releases. All variables in this model were statistically significant at the p < 0.05 and p < 0.01 levels. The Cobb-Douglas function was selected and used for the software testing effort estimation, and the results achieved with the CDF were compared with the estimates provided by the area expert. The model's estimation figures are more accurate than expert judgment; the CDF is thus one of the appropriate techniques for estimating software testing effort, with a model accuracy of 93.42%.
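A single-factor version of the Cobb-Douglas idea (E = A·x^a, fitted by linear regression in log-log space) is easy to sketch; the paper's model presumably uses several factors, and the data in the test below are synthetic.

```python
import math

def fit_cobb_douglas(x, effort):
    """Fit E = A * x**a by ordinary least squares in log-log space:
    log E = log A + a * log x (single-factor sketch of the CDF)."""
    lx = [math.log(v) for v in x]
    le = [math.log(v) for v in effort]
    n = len(x)
    mx, me = sum(lx) / n, sum(le) / n
    a = sum((u - mx) * (v - me) for u, v in zip(lx, le)) / \
        sum((u - mx) ** 2 for u in lx)
    A = math.exp(me - a * mx)
    return A, a

def predict(A, a, x):
    """Predicted testing effort for a new factor value x."""
    return A * x ** a
```

A multi-factor CDF (E = A·x1^a·x2^b·...) fits the same way with multiple linear regression on the logged factors.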
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S... – ijistjournal
System identification from experimental data plays a vital role in model-based controller design. Derivation of a process model from first principles is often difficult due to its complexity. The first stage in the development of any control and monitoring system is the identification and modeling of the system. Each model is developed within the context of a specific control problem, so the need for a general system identification framework is warranted. The proposed framework should be able to adapt and emphasize different properties based on the control objective and the nature of the behavior of the system. System identification has therefore been a valuable tool for identifying the model of the system, based on the input and output data, for the design of the controller. The present work is concerned with the identification of transfer function models using statistical model identification, the process reaction curve method, the ARX model and a genetic algorithm, together with modeling using neural networks and fuzzy logic, for interacting and non-interacting tank processes. The identification and modeling techniques used are subject to parameter change and disturbance. The proposed methods are used for identifying the mathematical model and an intelligent model of the interacting and non-interacting processes from real-time experimental data.
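For the ARX part specifically, a first-order model y[k] = a·y[k-1] + b·u[k-1] can be identified by solving the 2×2 least-squares normal equations directly; the input sequence in the test below is synthetic, not tank data.

```python
def fit_arx1(u, y):
    """Least-squares fit of a first-order ARX model
    y[k] = a*y[k-1] + b*u[k-1] from input/output records,
    solving the 2x2 normal equations in closed form."""
    s_yy = s_yu = s_uu = s_y1y = s_u1y = 0.0
    for k in range(1, len(y)):
        y1, u1, yk = y[k - 1], u[k - 1], y[k]
        s_yy += y1 * y1      # sum of y[k-1]^2
        s_yu += y1 * u1      # cross term y[k-1]*u[k-1]
        s_uu += u1 * u1      # sum of u[k-1]^2
        s_y1y += y1 * yk     # regressor-output correlations
        s_u1y += u1 * yk
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_y1y * s_uu - s_u1y * s_yu) / det
    b = (s_u1y * s_yy - s_y1y * s_yu) / det
    return a, b
```

Higher-order ARX models extend the same idea with more regressors, at which point a general linear-algebra solve replaces the closed form.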
ESTIMATING HANDLING TIME OF SOFTWARE DEFECTS – csandit
The problem of accurately predicting the handling time for software defects is of great practical
importance. However, it is difficult to suggest a practical generic algorithm for such estimates,
due in part to the limited information available when opening a defect and the lack of a uniform
standard for defect structure. We suggest an algorithm to address these challenges that is
implementable over different defect management tools. Our algorithm uses machine learning
regression techniques to predict the handling time of defects based on the past behaviour of similar
defects. The algorithm relies only on a minimal set of assumptions about the structure of the
input data. We show that an implementation of this algorithm predicts defect handling time with
promising accuracy.
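The regression technique is not specified in the abstract beyond "machine learning regression"; a deliberately minimal stand-in for "past behaviour of similar defects" is nearest-neighbour averaging over past defects with matching categorical fields. The field names and hour values below are hypothetical.

```python
def knn_handling_time(defect, history, k=3):
    """Predict handling time as the mean over the k most similar past
    defects, where similarity is simply the number of matching
    categorical fields -- a minimal stand-in for the paper's
    learned regression model."""
    def similarity(past):
        return sum(past[f] == defect[f] for f in defect)
    ranked = sorted(history, key=similarity, reverse=True)[:k]
    return sum(d['hours'] for d in ranked) / len(ranked)
```

Because the similarity function only compares whatever fields the new defect carries, the sketch shares the paper's stated property of assuming very little about defect structure.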
With the emergence of virtualization and cloud computing technologies, several services are hosted on virtualization platforms. Virtualization is the technology that many cloud service providers rely on for efficient management and coordination of the resource pool. As essential services are also hosted on cloud platforms, it is necessary to ensure continuous availability by implementing all necessary measures. Windows Active Directory is one such service, developed by Microsoft for Windows domain networks and included in Windows Server operating systems as a set of processes and services for the authentication and authorization of users and computers in a Windows-domain-type network. The service is required to run continuously without downtime. As a result, there is a chance of an accumulation of errors or garbage leading to software aging, which in turn may lead to system failure and its associated consequences. In this work, the software aging patterns of the Windows Active Directory service are studied. Software aging of Active Directory needs to be predicted properly so that rejuvenation can be triggered to ensure continuous service delivery. In order to predict the appropriate time, a model that uses a time series forecasting technique is built.
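A minimal version of such a forecast is a linear trend fitted to resource-usage samples and extrapolated to an exhaustion threshold; the model in the work itself may be more sophisticated (e.g. ARIMA), so this is only a sketch of the rejuvenation-trigger idea.

```python
def time_to_exhaustion(samples, threshold):
    """Fit a least-squares line through (index, usage) samples and
    extrapolate when usage crosses the threshold: the classic
    trend-based trigger for software rejuvenation. Returns the
    predicted crossing index, or None if no upward trend exists."""
    n = len(samples)
    xs = range(n)
    mx = (n - 1) / 2
    my = sum(samples) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, samples)) / \
            sum((x - mx) ** 2 for x in xs)
    if slope <= 0:
        return None                     # no aging trend detected
    intercept = my - slope * mx
    return (threshold - intercept) / slope
```

A monitoring loop would re-fit as samples arrive and schedule rejuvenation comfortably before the predicted crossing.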
The adoption of cloud environments for various applications has led to security and privacy concerns over users' data; protecting user data and privacy on such platforms is an area of concern.
Many cryptographic strategies have been presented to provide secure sharing of resources on cloud platforms. These methods try to achieve a secure authentication strategy realizing features such as self-blindable access tickets, group signatures, anonymous access tickets, minimal disclosure of tickets and revocation, but each one varies in its realization of these features. Each feature requires a different cryptographic mechanism for its realization, which induces computational complexity that affects the deployment of these models in practical applications. Most of these techniques are designed for a particular application environment and adopt public key cryptography, which incurs a high cost due to computational complexity.
To address these issues, this work presents secure and efficient privacy preservation for mining data on a public cloud platform by adopting a party- and key-based authentication strategy. The proposed SCPPDM (Secure Cloud Privacy Preserving Data Mining) is deployed on the Microsoft Azure cloud platform. Experiments are conducted to evaluate the computational complexity; the outcome shows the proposed model achieves significant performance in terms of computation overhead and cost.
More Related Content
Similar to BIO-INSPIRED MODELLING OF SOFTWARE VERIFICATION BY MODIFIED MORAN PROCESSES
IoT Device Intelligence & Real Time Anomaly DetectionBraja Krishna Das
-- Real Time Anomaly Detection
-- IoT Device Intelligence
-- Uni Variate and Multi Variate Anomaly Detection
-- Unsupervised Learning Classification from Anomaly Detection
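As a minimal sketch of the univariate, unsupervised anomaly detection mentioned in the bullets above, a z-score baseline flags readings that lie far from the mean (the threshold and the sensor values below are illustrative assumptions, not taken from this talk):

```python
import math

def zscore_anomalies(series, threshold=3.0):
    """Flag indices whose values lie more than `threshold` standard deviations
    from the mean -- a common baseline for univariate, unsupervised detection."""
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    if std == 0:
        return []
    return [i for i, x in enumerate(series) if abs(x - mean) > threshold * std]

# Illustrative sensor readings with one obvious spike at index 5.
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 35.0, 20.1, 19.8]
anomalies = zscore_anomalies(readings, threshold=2.0)
```

Multivariate detection generalizes this idea by replacing the scalar deviation with a distance over several sensor channels, e.g. the Mahalanobis distance.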
Formal method techniques provide a suitable platform for software development in software systems. Formal methods and formal verification are necessary to prove the correctness and improve the performance of software systems at various levels of design and implementation. Security is an important issue in computer systems. Since antivirus applications play a very important role in computer system security, verifying these applications is essential. In this paper, we present four new approaches for antivirus system behavior, and a behavioral model of the protection services in the antivirus system is proposed. We divide the behavioral model into preventive behavior and control behavior and then formalize these behaviors. Finally, using some definitions, we explain how these behaviors are mapped onto each other by our new approaches.
A SECURITY EVALUATION FRAMEWORK FOR U.K. E-GOVERNMENT SERVICES AGILE SOFTWARE...IJNSA Journal
This study examines the traditional approach to software development within the United Kingdom Government and the accreditation process. Initially we look at the Waterfall methodology that has been used for several years. We discuss the pros and cons of Waterfall before moving onto the Agile Scrum methodology. Agile has been adopted by the majority of Government digital departments including the Government Digital Services. Agile, despite its ability to achieve high rates of productivity organized in short, flexible, iterations, has faced security professionals’ disbelief when working within the U.K. Government. One of the major issues is that we develop in Agile but the accreditation process is conducted using Waterfall resulting in delays to go live dates. Taking a brief look into the accreditation process that is used within Government for I.T. systems and applications, we focus on giving the accreditor the assurance they need when developing new applications and systems. A framework has been produced by utilising the Open Web Application Security Project’s (OWASP) Application Security Verification Standard (ASVS). This framework will allow security and Agile to work side by side and produce secure code.
Proposed Algorithm for Surveillance ApplicationsEditor IJCATR
Technological systems are vulnerable to faults. In many fault situations, the system operation has to be stopped to avoid
damage to machinery and humans. As a consequence, the detection and the handling of faults play an increasing role in modern
technology, where many highly automated components interact in a complex way such that a fault in a single component may cause
the malfunction of the whole system. This work introduces the main ideas of fault diagnosis and fault-tolerant control in light of the various research efforts in this area. It presents the Arduino technology on both the hardware and the software side. The purpose of this paper is to propose a diagnostic algorithm based on this technology. A case study is proposed for this setting, and the results of our algorithm are explained and discussed.
Formal Verification of Distributed Checkpointing Using Event-Bijcsit
The development of complex systems makes correct software development a challenging task. Due to faulty specifications, software may contain errors. Traditional testing methods are not sufficient to verify the correctness of such complex systems. In order to capture correct system requirements and reason rigorously about the problems, formal methods are required. Formal methods are mathematical techniques that provide precise specifications of problems together with their solutions and proofs of correctness. In this paper, we have formally verified the checkpointing process in a distributed database system using Event-B.
Event-B is an event driven formal method which is used to develop formal models of distributed database
systems. In a distributed database system, the database is stored at different sites that are connected
together through the network. Checkpoint is a recovery point which contains the state information about
the site. In order to do recovery of a distributed transaction a global checkpoint number (GCPN) is
required. A global checkpoint number decides which transaction will be included for recovery purpose. All
transactions whose timestamps are less than the global checkpoint number will be marked as before-checkpoint transactions (BCPT) and will be considered for recovery purposes. The transactions whose timestamps are greater than the GCPN will be marked as after-checkpoint transactions (ACPT) and will be part of the next global checkpoint.
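The BCPT/ACPT split described in this abstract can be sketched in a few lines (a sketch only; how a timestamp exactly equal to the GCPN is treated is an assumption here, since the text specifies only strictly smaller and strictly greater):

```python
def classify_transactions(transactions, gcpn):
    """Split transactions by timestamp against the global checkpoint number (GCPN):
    timestamp < GCPN  -> BCPT, included in recovery;
    timestamp >= GCPN -> ACPT, deferred to the next global checkpoint (the equality
    case is an assumption, not specified in the abstract)."""
    bcpt = [name for name, ts in transactions if ts < gcpn]
    acpt = [name for name, ts in transactions if ts >= gcpn]
    return bcpt, acpt

# Illustrative transactions as (name, timestamp) pairs.
txns = [("T1", 5), ("T2", 12), ("T3", 9), ("T4", 15)]
bcpt, acpt = classify_transactions(txns, gcpn=10)
```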
Practical Guidelines to Improve Defect Prediction Model – A Reviewinventionjournals
Defect prediction models are used to pinpoint risky software modules and understand past pitfalls that lead to defective modules. The predictions and insights derived from defect prediction models may not be accurate and reliable if researchers do not consider the impact of the experimental components (e.g., datasets, metrics, and classifiers) of defect prediction modeling. Therefore, a lack of awareness and practical guidelines from previous research can lead to invalid predictions and unreliable insights. Through case studies of systems that span both proprietary and open-source domains, we find that (1) noise in defect datasets, (2) parameter settings of classification techniques, and (3) model validation techniques have a large impact on the predictions and insights of defect prediction models, suggesting that researchers should carefully select experimental components in order to produce more accurate and reliable defect prediction models.
Measurement and Evaluation of Reliability, Availability and Maintainability o...IOSR Journals
The growing complexity of equipment and systems often leads to failures, and as a consequence the
aspects of reliability, maintainability and availability have come to the forefront. The failure of machinery and
equipment causes disruption in production resulting from a loss of availability of the system and also increases
the cost of maintenance. The present study deals with the determination of the reliability and availability aspects of
one of the significant constituents of a Railway Diesel Locomotive Engine. In order to assess the availability
performance of these components, a broad set of studies has been carried out to gather accurate information at
the level of detail considered suitable to meet the availability analysis target. The Reliability analysis is
performed using the Weibull Distribution and the various data plots as well as failure rate information help in
achieving results that may be utilized in the near future by the Railway Locomotive Engines for reducing the
unexpected breakdowns and will enhance the reliability and availability of the Engine. In this work, ABC
analysis has been used for the maintenance of spare parts inventory. Here, Power pack assemblies, Engine
System are used to focus on the reliability, maintainability and availability aspects
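The Weibull-based reliability analysis mentioned above rests on two standard formulas, the survival function R(t) = exp(-(t/eta)^beta) and the hazard (failure) rate h(t); a minimal sketch with hypothetical shape and scale parameters, not the values fitted in this study:

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta): probability the component survives beyond time t."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Failure rate h(t) = (beta/eta) * (t/eta)^(beta-1); increasing for beta > 1,
    which corresponds to wear-out behaviour."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Hypothetical parameters: shape beta = 1.5 (wear-out), scale eta = 2000 hours.
r = weibull_reliability(1000, beta=1.5, eta=2000)
h = weibull_hazard(1000, beta=1.5, eta=2000)
```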
A model for run time software architecture adaptationijseajournal
Since the global demand for software systems is increasing while environments and systems change
constantly, the adaptability of software systems is of significant importance. Because the architecture of a
software system is a high-level view of the system and makes modifiability possible at an overall level,
adapting software systems by changing the architecture configuration can be considered an effective
approach. In this study, the architecture configuration is modified through the xADL
language, a software architecture description language with high flexibility. Software
architecture reconfiguration is done based on the existing rules of a rule-based system, which are written with
respect to three strategies: load balancing, fixed bandwidth and fixed latency. The proposed model
is simulated on samples of a client-server system, a video conferencing system and a students'
grading system. The proposed model can be used with all types of architecture, including Client-Server
Architecture, Service-Oriented Architecture, etc.
Software testing effort estimation with cobb douglas function a practical app...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Software testing effort estimation with cobb douglas function- a practical ap...eSAT Journals
Abstract: Effort estimation is one of the critical challenges in the Software Testing Life Cycle (STLC). It is the basis for a project's effort estimation, planning, scheduling and budget planning. This paper illustrates a model with the objective of depicting the accuracy and bias variation of an organization's estimates of software testing effort through the Cobb-Douglas function (CDF). The data variables selected for building the model were believed to be vital and to have a significant impact on the accuracy of the estimates. Data were gathered for the completed projects in the organization over about 13 releases. All variables in this model were statistically significant at the p<0.05 and p<0.01 levels. The Cobb-Douglas function was selected and used for the software testing effort estimation. The results achieved with the CDF were compared with the estimates provided by the area expert. The model's estimation figures are more accurate than the expert judgment. The CDF is one of the appropriate techniques for estimating software testing effort. The CDF model accuracy is 93.42%.
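The Cobb-Douglas form behind such an estimate multiplies a scale constant by the input drivers raised to fitted exponents, E = a * x1^b1 * x2^b2 * ...; a minimal sketch with hypothetical coefficients (the fitted values from the 13 releases are not reproduced here):

```python
def cobb_douglas_effort(a, inputs, exponents):
    """E = a * x1^b1 * x2^b2 * ... : the Cobb-Douglas production form,
    applied here to software testing effort estimation."""
    effort = a
    for x, b in zip(inputs, exponents):
        effort *= x ** b
    return effort

# Hypothetical drivers: 100 test cases, team of 8; hypothetical coefficients.
estimate = cobb_douglas_effort(a=2.0, inputs=[100, 8], exponents=[0.6, 0.4])
```

In practice the coefficients a, b1, b2 would be fitted by linear regression on the log-transformed historical data, since log E is linear in the log inputs.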
SYSTEM IDENTIFICATION AND MODELING FOR INTERACTING AND NON-INTERACTING TANK S...ijistjournal
System identification from the experimental data plays a vital role for model based controller design. Derivation of process model from first principles is often difficult due to its complexity. The first stage in the development of any control and monitoring system is the identification and modeling of the system. Each model is developed within the context of a specific control problem. Thus, the need for a general system identification framework is warranted. The proposed framework should be able to adapt and emphasize different properties based on the control objective and the nature of the behavior of the system. Therefore, system identification has been a valuable tool in identifying the model of the system based on the input and output data for the design of the controller. The present work is concerned with the identification of transfer function models using statistical model identification, process reaction curve method, ARX model, genetic algorithm and modeling using neural network and fuzzy logic for interacting and non interacting tank process. The identification technique and modeling used is prone to parameter change & disturbance. The proposed methods are used for identifying the mathematical model and intelligent model of interacting and non interacting process from the real time experimental data.
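Among the listed techniques, ARX identification is easy to illustrate: a first-order model y[k+1] = a*y[k] + b*u[k] can be fitted by solving the 2x2 least-squares normal equations. The sketch below uses simulated, noise-free data with assumed true parameters; it is not the authors' implementation:

```python
import math

def simulate_arx(a, b, u, y0=0.0):
    """Generate outputs of the first-order ARX system y[k+1] = a*y[k] + b*u[k]."""
    y = [y0]
    for k in range(len(u)):
        y.append(a * y[k] + b * u[k])
    return y

def identify_arx(u, y):
    """Least-squares estimate of (a, b) via the 2x2 normal equations."""
    s_yy = s_uu = s_yu = c1 = c2 = 0.0
    for k in range(len(u)):
        s_yy += y[k] * y[k]
        s_uu += u[k] * u[k]
        s_yu += y[k] * u[k]
        c1 += y[k] * y[k + 1]
        c2 += u[k] * y[k + 1]
    det = s_yy * s_uu - s_yu ** 2
    a_hat = (c1 * s_uu - c2 * s_yu) / det
    b_hat = (c2 * s_yy - c1 * s_yu) / det
    return a_hat, b_hat

# Persistently exciting input; assumed true parameters a = 0.8, b = 0.5.
u = [math.sin(0.7 * k) + 0.5 * math.cos(2.3 * k) for k in range(200)]
y = simulate_arx(0.8, 0.5, u)
a_hat, b_hat = identify_arx(u, y)
```

With noise-free data the estimates recover the true parameters exactly; with measurement noise, longer records or regularization would be needed.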
ESTIMATING HANDLING TIME OF SOFTWARE DEFECTScsandit
The problem of accurately predicting handling time for software defects is of great practical
importance. However, it is difficult to suggest a practical generic algorithm for such estimates,
due in part to the limited information available when opening a defect and the lack of a uniform
standard for defect structure. We suggest an algorithm to address these challenges that is
implementable over different defect management tools. Our algorithm uses machine learning
regression techniques to predict the handling time of defects based on past behaviour of similar
defects. The algorithm relies only on a minimal set of assumptions about the structure of the
input data. We show how an implementation of this algorithm predicts defect handling time with
promising accuracy results
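The abstract does not publish the exact regression algorithm, so as an illustrative stand-in, a nearest-neighbour regressor predicts a new defect's handling time from the most similar past defects (the feature names and data below are hypothetical):

```python
def knn_predict_handling_time(history, features, k=3):
    """Predict a defect's handling time as the mean over the k most similar past
    defects, using squared Euclidean distance over numeric features -- a minimal
    stand-in for the machine-learning regression described in the abstract."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda rec: dist(rec[0], features))[:k]
    return sum(hours for _, hours in nearest) / k

# Hypothetical records: (severity, component churn) -> handling hours.
history = [((1, 10), 4.0), ((3, 50), 20.0), ((1, 12), 5.0),
           ((2, 30), 12.0), ((1, 8), 3.0)]
pred = knn_predict_handling_time(history, features=(1, 11), k=3)
```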
BIO-INSPIRED MODELLING OF SOFTWARE VERIFICATION BY MODIFIED MORAN PROCESSES
International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol.5, No.3, June 2015
DOI : 10.5121/ijcsea.2015.5301
BIO-INSPIRED MODELLING OF SOFTWARE
VERIFICATION BY MODIFIED MORAN PROCESSES
Sven Söhnlein
Method Park Engineering GmbH, Wetterkreuz 19a, Erlangen, Germany
ABSTRACT
A new approach for the control and prediction of verification activities for large safety-relevant software
systems will be presented in this paper. The model is applied at a macroscopic system level and is based on
so-called Moran processes, which originate from mathematical biology and allow for the description
of phenomena such as, for instance, genetic drift. Besides the theoretical foundations of this novel approach, its
application to a real-world example from the medical engineering domain will be discussed.
KEYWORDS
Modelling, Simulation, Dependability, Reliability, Software Engineering
1. INTRODUCTION
The development of safety-relevant software systems is usually subject to very strict regulations
prescribed by corresponding standards, like the IEC 62304 for medical device software [1], for
example. In order to provide the necessary control and prediction instruments for the required
verification activities of such applications, the use of software reliability models seems
reasonable. Here, a huge spectrum of different theoretical approaches is available in the literature
(see [2, 3, 4] for an overview). But the problems in the practical implementation of such models
in a real-world software lifecycle process are manifold:
First of all, the usually very strict (and non-verifiable [2]) model assumptions are not flexible
enough to also cover continuous integration paradigms [5] or post-development phases,
where patches or add-ons are integrated [6]. Moreover, these assumptions are usually not implied
by the relevant standards and regulations, but are frequently model-intrinsic [2]. In addition to
that, implications that come from typical management necessities in those areas are
predominantly ignored [7].
With regard to these determining factors, we propose a practical model that applies at a
macroscopic level of large systems and takes into account regulative prescriptions regarding the
lifecycle process, software architecture, as well as planning and management demands. The
introduced model is inspired by mathematical concepts that were originally applied to describe
biological processes in finite populations.
1.1. Paper Structure
The paper is organized as follows: In section 2, the relevant regulative and organisational factors
are determined, which will be used in the following to derive the theoretical basis for the model.
The approach itself will be introduced in section 3. Section 4 illustrates the application of the
model on a real-world system from the medical engineering domain, followed by a conclusion in
section 5.
2. DETERMINING REGULATIVE AND ORGANISATIONAL FACTORS
In order to derive an adequate context-specific model, one has to analyse the implications that
come from the corresponding standards in the particular application domain. In the case of medical
device software, the IEC 62304 [1] represents the relevant norm (where it should be noted that
similar standards exist for other safety-relevant applications, like the ISO 26262 [8] for the
automotive domain, for instance).
In the following, the key aspects carved out from the regulative and organisational prescriptions
will be highlighted and referenced in subsequent sections as a basis for the provided model.
Regulative Factors:
R1. Software Lifecycle Process: The development follows a strict plan-driven software
lifecycle process (like the V-Model [9] or the Waterfall Model [9]). This implies in particular that
at least every requirement has to be verified by one (or more) corresponding test cases or by
another adequate verification technique [1].
R2. Software Architecture: The subdivision of the software system into interacting components
and units must be described and documented. With regard to this modularization, software units
represent the smallest atomic parts in the software architecture [1], whereas components in turn
consist of a finite number of units [1].
R3. Quality Management System: The IEC 62304 [1] prescribes a quality management system
(as defined by the ISO 13485 [10], for instance). Thus, it is required to define quality goals and
verify to which extent they are fulfilled.
Organisational Factors:
O1. Verification and Correction Phases: The typical management procedure [11] for the
verification process in the considered domain consists of a timely subdivided organization of
verification and correction phases, which consist of a certain subset of the overall number of
planned test cases.
O2. Impact Analysis: In advance of every correction, an impact analysis [12] is performed in
order to reveal the number of units that will be “touched” in the subsequent correction phase.
O3. Statistical Process Control: Statistical process control [13] is performed with the intent to
derive measures (considering the progress of verification and correction activities) from past
projects with regard to the current or upcoming one.
Delimitation of Consideration:
Further, the scope of consideration will be delimited as follows:
D1. Classification of Software Units: Software units represent the smallest parts of consideration
and will be classified as ‘correct’ XOR ‘faulty’ (with no further distinction regarding the involved
code parts).
D2. Correction of Faults: Faulty software units which are corrected during a verification and
correction phase change their classification status from ‘faulty’ to ‘correct’.
D3. Insertion of Faults: The correction process is not perfect, i.e. it also has the potential to inject
new faults into the system, which is represented by a change of the classification status of a
software unit from ‘correct’ to ‘faulty’.
Taking all these aspects into account, the following relation between the relevant elements of the
verification and correction process can be established (see figure 1), where r_i (with i = 1, …, p)
denotes a requirement, t_j (with j = 1, …, q) a test case, u_k (with k = 1, …, N) a software unit and
c_l (with l = 1, …, m) a component:
Each requirement is at least verified by one or more test cases (with regard to assumption R1),
where test cases “spot” the ‘faulty’ (or ‘correct’) units within certain components of the system
(with regard to assumptions R2 and D1). The software units to be “touched” in the subsequent
correction phase are revealed by the performed impact analysis (with regard to assumption O2).
These software units might thereby change their classification status from ‘faulty’ to ‘correct’
(which is the more probable case) but possibly also from ‘correct’ to ‘faulty’ (with regard to
assumptions D2 and D3).
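The relations between requirements, test cases, units and components described above (R1, R2, D1, O2) can be made concrete in a short sketch; all class and function names here are hypothetical, since the paper presents no implementation:

```python
from dataclasses import dataclass

# Hypothetical names; the paper itself presents no code.
@dataclass
class Unit:
    name: str
    faulty: bool = False          # D1: a unit is 'correct' XOR 'faulty'

@dataclass
class Component:
    units: list                   # R2: a component consists of finitely many units

@dataclass
class Requirement:
    test_cases: list              # R1: each requirement is verified by >= 1 test cases

def impact_set(components, touched_names):
    """O2: the impact analysis reveals the units 'touched' in the next correction phase."""
    return [u for c in components for u in c.units if u.name in touched_names]

c1 = Component(units=[Unit("u1", faulty=True), Unit("u2")])
r1 = Requirement(test_cases=["tc1", "tc2"])
touched = impact_set([c1], {"u1"})
for u in touched:
    u.faulty = False              # D2: a corrected unit flips to 'correct'
```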
Figure 1. Relation between verification and correction elements
3. MODELLING SOFTWARE VERIFICATION VIA MORAN PROCESSES
Moran processes are stochastic models that originate from mathematical biology and are used to
describe, for instance, mutations in finite populations (see [14, 15] for an introduction). In the
basic model, a finite population of size N ∈ ℕ consists of two alleles (let’s say ‘green’ and ‘red’),
which are competing for dominance. In each time step, a random individual is chosen for
reproduction and another one is chosen for death, thus ensuring a constant population size. The
“fitness” of the alleles hereby determines how likely they are to be chosen for reproduction and
therefore affects the time to fixation (i.e. the time for taking over the whole population).
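A minimal simulation of these basic Moran dynamics, with one fitness-weighted choice for reproduction and one uniform choice for death per time step (the parameter values are arbitrary illustrations):

```python
import random

def moran_step(n_green, N, fitness_green=1.0, fitness_red=1.0):
    """One Moran step: choose one individual for reproduction (weighted by fitness)
    and one for death (uniformly), keeping the population size N constant."""
    n_red = N - n_green
    total = fitness_green * n_green + fitness_red * n_red
    reproduce_green = random.random() < fitness_green * n_green / total
    die_green = random.random() < n_green / N
    if reproduce_green and not die_green:
        return n_green + 1
    if die_green and not reproduce_green:
        return n_green - 1
    return n_green

def run_to_fixation(n_green, N, fitness_green, seed=0):
    """Iterate until one allele has taken over the whole population."""
    random.seed(seed)
    steps = 0
    while 0 < n_green < N:
        n_green = moran_step(n_green, N, fitness_green)
        steps += 1
    return n_green, steps

# Arbitrary illustration: 5 'green' individuals out of N = 20, fitness advantage 2.
final_state, steps = run_to_fixation(5, 20, fitness_green=2.0)
```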
In order to map this biological model to the considered software context, the discussed aspects
from section 2 are addressed as follows: The whole software system (which can be interpreted as
the DNA [16]) consists of components (DNA segments) that consist of a finite population of
units (genes [16]). Those units (genes) can be classified into two categories (alleles [16]) marked
‘correct’ (green) XOR ‘faulty’ (red). A single unit can shift its classification (allele) from ‘correct’
to ‘faulty’ or from ‘faulty’ to ‘correct’ in one time step (which represents the mutation process
[16]). This means that the whole verification and correction process can be considered as the
genetic drift [16] in the software system, where the goodness of the process is affected by the
fitness of the alleles. Table 1 shows an overview of the corresponding elements from both worlds.
Table 1. Mapping of technical and biological elements

  Software World                                          Biological World
  Software system                                         DNA
  Component                                               DNA segment
  Unit                                                    Gene
  Classification of a unit {'correct', 'faulty'}          Allele {'green', 'red'}
  Correction of a fault / insertion of a fault            Mutation
  Verification and correction process                     Genetic drift
  Goodness of the verification and correction activities  Fitness
In accordance with this mapping, the verification and correction process can be described by a discrete-time Markov chain (DTMC [15])

    X(t) with t ∈ ℕ

where X(t) denotes the family of random variables (indexed by the discrete time t). The process underlies a finite state space S, where

    |S| = N + 1

and N ∈ ℕ represents the number of software units in the system. Every state i ∈ S (with i = 0, …, N) is hereby associated with a software system consisting of i correct (verified) software units (and N − i faulty units).
With regard to assumption O1 (see section 2), it is assumed that in verification and correction phase p_k (with k ∈ ℕ and k = 1, …, K), where K represents the overall number of verification and correction phases, we reach a certain state i. Then, the required impact analysis (see assumption O2) will reveal the number of software units that are "touched" in the next verification and correction phase p_{k+1} and therefore implies the expected number of time steps for the Moran process in the subsequent phase (see figure 2 for an illustration).
Figure 2. Verification and correction phases (p_k) in the Moran process
Therefore, the DTMC for the Moran process described above can be defined by the |S| × |S| transition matrix P, where the entries of P are specified as

    P_{i,i+1} = (φ_k ∙ i) / (φ_k ∙ i + N − i) ∙ (N − i) / N    for 0 < i < N    (1)

    P_{i,i−1} = (N − i) / (φ_k ∙ i + N − i) ∙ i / N             for 0 < i < N    (2)

    P_{i,i} = 1 − P_{i,i−1} − P_{i,i+1}                          for 0 < i < N    (3)

    P_{i,i} = 1                                                  for i = 0 ∨ i = N    (4)

and all other entries of P are zero, which results in a tridiagonal matrix. Here, φ_k represents the mentioned phase-specific "fitness" of the verification and correction activities in phase p_k and can be derived by statistical process control techniques (see assumption O3 in section 2). A coarse approximation of φ_k might be estimated by the fraction of successfully corrected components in relation to the inserted faults. Note that in contrast to the original Moran process model [14], the fitness is not fixed here, but changes in accordance with the phase of the whole verification and correction process, which is reasonable with regard to the different preconditions in each phase.
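Equations (1)–(4) translate directly into code. The following Python sketch (the function name `transition_matrix` is illustrative) builds P for a given phase fitness φ_k:

```python
def transition_matrix(N, phi):
    """Build the (N+1) x (N+1) transition matrix P of the modified Moran
    process according to equations (1)-(4); phi is the phase-specific
    fitness of the verification and correction activities."""
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    P[0][0] = 1.0  # eq. (4): state 0 is absorbing (no correct unit)
    P[N][N] = 1.0  # eq. (4): state N is absorbing (all units correct)
    for i in range(1, N):
        up = (phi * i) / (phi * i + (N - i)) * (N - i) / N  # eq. (1)
        down = (N - i) / (phi * i + (N - i)) * i / N        # eq. (2)
        P[i][i + 1] = up
        P[i][i - 1] = down
        P[i][i] = 1.0 - up - down                           # eq. (3)
    return P
```

Each row sums to one, and all probability mass sits on the diagonal and its two neighbours, i.e. the matrix is tridiagonal as stated above.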
In general, φ_k can be categorized as follows:

φ_k > 1: This is the usual and expected case, where (significantly) more faults are detected and corrected than injected.

φ_k = 1: In this case, we have a "neutral" drift (and an unsystematic verification and correction process).

φ_k < 1: This is the unusual and unexpected case, where more new faults are injected into the system than detected and corrected.
The defined Moran process is initialized by the start vector v^(0) with entries

    v_j^(0) = 1    for j = 1    (5)

    v_j^(0) = 0    for j ≠ 1    (6)

which means that at least one correct unit is available at the beginning. Further, we denote by m_k (with m_k ∈ ℕ and m_k = 0, …, N) the number of software units to be touched in a certain phase p_k (see the presumed impact analysis O2). Then, if the first verification and correction phase p_1 reveals that the number of software units to be touched in this phase is m_1, the state vector of the Moran process at this stage is computed by

    v^(1) = v^(0) ∙ P^(m_1)    (7)
More generally, if in phase p_k we reach a certain state i (which is associated with a software system of i already verified software units), then the state vector for phase p_{k+1} is computed by

    v^(k+1) = v^(k) ∙ P^(m_{k+1})    (8)

with

    v_j^(k) = 1    for j = i    (9)

    v_j^(k) = 0    for j ≠ i    (10)
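Equations (7)–(10) amount to repeated vector–matrix multiplication. A minimal sketch (assuming P is given as a row-stochastic list-of-lists matrix; the helper names are illustrative):

```python
def unit_vector(size, i):
    """State vector per equations (9)-(10): all probability mass on state i."""
    v = [0.0] * size
    v[i] = 1.0
    return v

def propagate(v, P, m):
    """Equations (7)-(8): compute v . P^m as m successive vector-matrix
    products, which avoids forming the matrix power P^m explicitly."""
    n = len(v)
    for _ in range(m):
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return v
```

Since each multiplication preserves total probability mass, the resulting vector remains a probability distribution over the N + 1 states.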
Moreover, by

    σ_k = max_{j=0,…,N} v_j^(k)    (11)

the most probable state ĵ^(k) at p_k can be determined with

    v_{ĵ^(k)}^(k) = σ_k    (12)
In order to estimate if predefined reliability targets (see assumption R3) are met (in terms of the minimum number of units that have to be correct after a certain verification and correction phase), we denote by R_k(c_k) the probability that in phase p_k we have at least c_k correct units, which can be computed by

    R_k(c_k) = Σ_{j=c_k}^{N} v_j^(k)    (13)
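Equations (11)–(13) can be read off a state vector directly. In the following illustrative helpers, rounding a fractional target c_k up to the next integer state is our assumption (the targets in Table 2 below are fractional):

```python
import math

def most_probable_state(v):
    """Equations (11)-(12): the index j carrying the maximal probability."""
    return max(range(len(v)), key=lambda j: v[j])

def reliability(v, c):
    """Equation (13): probability of having at least c correct units.
    A fractional target c is rounded up to the next reachable state."""
    return sum(v[j] for j in range(math.ceil(c), len(v)))
```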
4. EXAMPLE APPLICATION

In this section, the application of the model to a real-world system (from the medical engineering domain) will be discussed. The model was applied in order to assess the progress of the verification and correction activities, with a specific focus on the reachability of the predefined reliability targets. The system consisted of an overall number of N = 363 software units. The verification and correction process was subdivided into K = 5 phases. Table 2 shows the number of software units m_k that were touched in each phase (estimated by the corresponding impact analysis), the verification fitness φ_k for each phase (estimated by the application of the previously mentioned statistical process control techniques), the predefined reliability target c_k for each phase (as an outcome of the project and risk management activities) as well as the computed measures according to equations (1) – (13) from section 3.
Table 2. Computed measures for the example system

  k | m_k | m_k/N (%) | Σ m_k | Σ m_k/N (%) | φ_k   | c_k    | c_k/N (%) | ĵ^(k) | R_k(c_k)
  1 | 114 | 31.40     | 114   |  31.40      | 37.96 |  72.60 | 20.00     |  65   | 0.14
  2 |  95 | 26.17     | 209   |  57.57      | 18.79 | 163.35 | 45.00     | 168   | 0.69
  3 |  74 | 20.39     | 283   |  77.96      | 13.67 | 235.95 | 65.00     | 234   | 0.40
  4 |  53 | 14.60     | 336   |  92.56      | 11.32 | 290.40 | 80.00     | 293   | 0.87
  5 |  27 |  7.44     | 363   | 100.00      |  9.41 | 326.70 | 90.00     | 337   | 0.99
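The computation behind such a table can be sketched end-to-end from the model inputs N, m_k and φ_k. The sketch below is one plausible reading of equations (5)–(13): it assumes that each phase restarts from the most probable state of the previous phase (the text only says "a certain state i", and the mapping from touched units to time steps is only partially specified), so the printed values need not reproduce Table 2 exactly.

```python
import math

def transition_matrix(N, phi):
    """Equations (1)-(4) from section 3."""
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    P[0][0] = P[N][N] = 1.0
    for i in range(1, N):
        up = (phi * i) / (phi * i + (N - i)) * (N - i) / N
        down = (N - i) / (phi * i + (N - i)) * i / N
        P[i][i + 1], P[i][i - 1], P[i][i] = up, down, 1.0 - up - down
    return P

def propagate(v, P, m):
    """v . P^m; the inner sum only covers j-1, j, j+1 since P is tridiagonal."""
    n = len(v)
    for _ in range(m):
        v = [sum(v[i] * P[i][j] for i in range(max(0, j - 1), min(n, j + 2)))
             for j in range(n)]
    return v

N = 363                                          # software units
m = [114, 95, 74, 53, 27]                        # touched units per phase
phi = [37.96, 18.79, 13.67, 11.32, 9.41]         # phase fitness
c = [72.60, 163.35, 235.95, 290.40, 326.70]      # reliability targets

results, state = [], 1                           # eqs (5)-(6): start in state 1
for k in range(5):
    v = [0.0] * (N + 1)
    v[state] = 1.0                               # eqs (9)-(10)
    v = propagate(v, transition_matrix(N, phi[k]), m[k])   # eqs (7)-(8)
    state = max(range(N + 1), key=lambda j: v[j])          # eqs (11)-(12)
    R = sum(v[j] for j in range(math.ceil(c[k]), N + 1))   # eq. (13)
    results.append((state, R))
    print(f"phase {k + 1}: most probable state {state}, R = {R:.2f}")
```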
If we look at the predefined reliability target c_k and the computed most probable state ĵ^(k) for each phase, we can see how the predefinition differs from the prediction according to the varying fitness and number of touched units in each phase. While for phases p_2, p_3 and p_4 the most probable states are pretty close to the predefined targets, the discrepancies in phases p_1 and p_5 are comparatively high. And apart from p_1 and p_3, the predefined targets underestimated the most probable states in this case, which is also illustrated in figure 3. But this might be a little misleading with regard to the computed probabilities R_k(c_k) of reaching the predefined goals. Here, only p_4 and p_5 establish a substantial confidence in the reachability of the predefined quality goals, which is also shown in figure 4.
Figure 3. Comparison of predefined and predicted measures

Figure 4. Evolving probability of meeting the predefined targets

The computed measures illustrate how the introduced model can be utilized to adjust predefined targets for the verification and correction phases.

5. CONCLUSIONS

In this paper, a novel approach for the support of correction processes of large safety-relevant software systems was introduced. Beside the derivation of the theoretical foundations of this model, its application on a real-world example was also shown. Thereby it could be demonstrated how this technique can serve as an instrument for the planning and control of the verification activities in such an environment.
REFERENCES
[1] International Electrotechnical Commission: Medical device software - Software life-cycle processes,
IEC62304:2006 (2006).
[2] M. R. Lyu (Editor), Handbook of Software Reliability Engineering, IEEE Computer Society Press,
McGraw-Hill, 1996.
[3] J. D. Musa, Software Reliability Engineering, McGraw-Hill, 1999.
[4] D. P. Siewiorek and R. S. Swarz, The Theory and Practice of Reliable System Design, Digital Press, 1982.
[5] P. M. Duvall et al., Continuous Integration: Improving Software Quality and Reducing Risk,
Addison-Wesley, 2007.
[6] S. S. Yau and J. S. Collofello, "Design Stability Measures for Software Maintenance", IEEE Transactions on Software Engineering, Vol. 11 (9), pp. 849-856, 1985.
[7] S. R. Rakitin, Software Verification and Validation for Practitioners and Managers, 2nd ed., Artech
House, Inc., 2001.
[8] ISO 26262-1:2011(en) Road vehicles - Functional safety, International Standardization Organization
(2011).
[9] I. Sommerville, Software Engineering, 9th ed., Pearson, 2012.
[10] ISO 13485:2003 Medical devices - Quality management systems - Requirements for regulatory
purposes (2003).
[11] M. Pol et al., Software Testing: A Guide to the TMap Approach, Addison-Wesley Professional, 2001.
[12] K. Fisler et al., "Verification and change-impact analysis of access-control policies", Proceedings of the 27th International Conference on Software Engineering, ACM, 2005.
[13] J. S. Oakland, Statistical process control, Routledge, 2008.
[14] P. A. P. Moran, Random processes in genetics, Mathematical Proceedings of the Cambridge
Philosophical Society, Vol. 54. (1), Cambridge University Press, 1958.
[15] M. A. Nowak, Evolutionary dynamics, Harvard University Press, 2006.
[16] K. S. Trivedi, Probability and Statistics with Reliability, Queuing and Computer Science Applications, PHI Learning Pvt. Limited, 2011.
AUTHOR
Dr. Sven Söhnlein received a PhD in Engineering and an MSc in Computer Science from the University of
Erlangen-Nürnberg (Germany). Until 2014 he was a Senior Researcher at the University of Erlangen-
Nürnberg and is currently working for the company Method Park Engineering GmbH.