The use of open source software is becoming more and more predominant, and it is important that the reliability of this software is evaluated. Even though many researchers have tried to establish the failure patterns of different packages, a deterministic model for evaluating reliability has not yet been developed. The present work details a simplified model for evaluating the reliability of open source software based on the available failure data. The methodology involves identifying a fixed number of packages at the start of the observation period and defining the failure rate based on the failure data for this preset number of packages. The resulting failure-rate function is used to arrive at the reliability model. The reliability values obtained using the developed model are also compared with the exact reliability values. Key words: Bugs, Failure density, Failure rate, Open source software, Reliability
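The abstract does not spell out the functional form of the failure rate; as a minimal sketch of the general approach (hypothetical failure counts, a piecewise-constant failure rate, and the standard relation that reliability is the exponential of the negative cumulative hazard), reliability could be computed from observed failure data roughly as follows:

```python
import numpy as np

# Hypothetical failure counts per observation interval for a preset set of packages
failures_per_interval = np.array([12, 9, 7, 5, 4, 3, 2, 2, 1, 1])
interval_length = 30.0          # days per interval (assumed)
packages_at_start = 50          # fixed number of packages tracked from t = 0

# Piecewise-constant failure rate: failures per package per day in each interval
failure_rate = failures_per_interval / (packages_at_start * interval_length)

# Reliability R(t) = exp(-integral of lambda(t) dt), evaluated at interval boundaries
cumulative_hazard = np.cumsum(failure_rate * interval_length)
reliability = np.exp(-cumulative_hazard)

for k, r in enumerate(reliability, start=1):
    print(f"End of interval {k}: R = {r:.3f}")
```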
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Software testing defect prediction model: a practical approach (eSAT Journals)
Abstract: Software defect prediction aims to reduce software testing effort by guiding testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality and testing, and plan resources better to meet timelines. Applying a statistical software testing defect prediction model in a real-life setting is extremely difficult because it requires a large number of data variables and metrics, as well as historical defect data, to predict the next releases or new projects of a similar type. This paper explains our statistical model and how it accurately predicts the defects for upcoming software releases or projects. We used 20 past release data points of a software project and 5 parameters, and built a model by applying descriptive statistics, correlation analysis, and multiple linear regression with 95% confidence intervals (CI). For the selected multiple linear regression model, the R-squared value was 0.91 and the standard error was 5.90%. The software testing defect prediction model is now being used to predict defects in various testing projects and operational releases. We found 90.76% precision between actual and predicted defects.
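The abstract does not list the five release-level parameters, so the following is only a sketch of the kind of multiple linear regression it describes, using hypothetical column names and synthetic data with statsmodels (which reports R-squared and 95% confidence intervals directly):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical release-level data; the paper's actual five parameters are not listed
# in the abstract, so these column names are placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "kloc":          rng.uniform(10, 100, 20),
    "test_cases":    rng.integers(200, 2000, 20),
    "churned_files": rng.integers(20, 300, 20),
    "review_hours":  rng.uniform(10, 120, 20),
    "team_size":     rng.integers(3, 15, 20),
})
df["defects"] = (0.5 * df["kloc"] + 0.02 * df["churned_files"]
                 + rng.normal(0, 5, 20))          # synthetic response

X = sm.add_constant(df.drop(columns="defects"))   # predictors plus intercept
model = sm.OLS(df["defects"], X).fit()

print(model.summary())             # R-squared, coefficients, standard errors
print(model.conf_int(alpha=0.05))  # 95% confidence intervals for the coefficients

# Predicting defects for a new release uses the same columns:
new_release = X.iloc[[0]]
print(model.predict(new_release))
```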
A case study attempting to answer the question "Are there statistical correlations between statement coverage and the number of failures detected?" and comparing different reliability growth models.
Software testing is an important activity of the software development process and is the most effort-consuming phase of software development. One would like to minimize the effort and maximize the number of faults detected, and automated test case generation helps reduce cost and time. Hence, test case generation may be treated as an optimization problem. In this paper we use a genetic algorithm to optimize test cases that are generated by applying conditional coverage on source code. Test data generated automatically using the genetic algorithm is optimized and outperforms test cases generated by random testing.
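As a rough illustration of the genetic-algorithm idea described above (not the paper's actual implementation), the sketch below evolves small test suites toward full branch coverage of a toy function; the function under test, the fitness definition, and the GA settings are all assumptions:

```python
import random

def function_under_test(x, y):
    """Toy program whose branches we want to cover."""
    covered = set()
    if x > 10:
        covered.add("b1")
    else:
        covered.add("b2")
    if y % 2 == 0 and x < 50:
        covered.add("b3")
    else:
        covered.add("b4")
    return covered

BRANCHES = {"b1", "b2", "b3", "b4"}

def fitness(test_suite):
    """Branch coverage achieved by a suite of (x, y) test cases."""
    covered = set()
    for x, y in test_suite:
        covered |= function_under_test(x, y)
    return len(covered) / len(BRANCHES)

def random_suite(size=3):
    return [(random.randint(0, 100), random.randint(0, 100)) for _ in range(size)]

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(suite, rate=0.2):
    return [(random.randint(0, 100), random.randint(0, 100)) if random.random() < rate else tc
            for tc in suite]

population = [random_suite() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 1.0:
        break
    parents = population[:10]                      # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

population.sort(key=fitness, reverse=True)
print("best suite:", population[0], "coverage:", fitness(population[0]))
```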
Software analytics (for software quality purposes) is a statistical or machine learning classifier that is trained to identify defect-prone software modules. The goal of software analytics is to help software engineers prioritize their software testing effort on the riskiest modules and understand past pitfalls that lead to defective code. While the adoption of software analytics enables software organizations to distil actionable insights, there are still many barriers to broad and successful adoption of such analytics systems. Indeed, even if software organizations can access such invaluable software artifacts and toolkits for data analytics, researchers and practitioners often have little knowledge of how to properly develop analytics systems. Thus, the accuracy of the predictions and the insights that are derived from analytics systems is one of the most important challenges of data science in software engineering.
In this work, we conduct a series of empirical investigations to better understand the impact of experimental components (i.e., class mislabelling, parameter optimization of classification techniques, and model validation techniques) on the performance and interpretation of software analytics. To accelerate a large amount of compute-intensive experiments, we leverage the High-Performance Computing (HPC) resources of the Centre for Advanced Computing (CAC) at Queen's University, Canada. Through case studies of systems that span both proprietary and open-source domains, we demonstrate that (1) realistic noise does not impact the precision of software analytics; (2) automated parameter optimization of classification techniques substantially improves the performance and stability of software analytics; and (3) the out-of-sample bootstrap validation technique produces a good balance between bias and variance of performance estimates. Our results lead us to conclude that the experimental components of analytics modelling impact the predictions and the associated insights that are derived from software analytics. Empirical investigations of the impact of overlooked experimental components are needed to derive practical guidelines for analytics modelling.
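For point (2), a minimal sketch of automated parameter optimization is a grid search over classifier settings; the classifier, parameter grid, and synthetic defect data below are illustrative stand-ins, not the study's actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a module-level defect dataset (metrics -> defective yes/no)
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Default-settings model vs. a model whose parameters are tuned automatically
default_model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100, 300], "max_depth": [3, 5, None]},
    scoring="roc_auc",
    cv=5,
)
tuned_model = grid.fit(X_train, y_train).best_estimator_

for name, model in [("default", default_model), ("tuned", tuned_model)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name:7s} AUC = {auc:.3f}")
print("best parameters:", grid.best_params_)
```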
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo... (Editor IJCATR)
Software reliability is considered a quantifiable metric, defined as the probability that software will operate without failure for a specified period of time in a specified environment. Various software reliability growth models have been proposed to predict the reliability of software; these models help vendors predict the behaviour of the software before shipment. Reliability is predicted by estimating the parameters of the software reliability growth models, but the model parameters are generally related nonlinearly, which creates many problems in finding the optimal parameters using traditional techniques such as Maximum Likelihood and Least Squares Estimation. Various stochastic search algorithms have been introduced that make parameter estimation more reliable and computationally easier. This paper explores parameter estimation of NHPP-based reliability models using MLE and using an evolutionary search algorithm called Particle Swarm Optimization.
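As an illustration of MLE-based parameter estimation for an NHPP model (here the Goel-Okumoto form m(t) = a(1 - e^(-bt)), fitted to hypothetical grouped failure data), one can maximize the Poisson log-likelihood numerically; a stochastic search such as Particle Swarm Optimization would simply replace the local optimizer used here:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Hypothetical grouped failure data: cumulative test time (hours) and
# number of failures observed in each interval.
t = np.array([100, 200, 300, 400, 500, 600, 700, 800], dtype=float)
n = np.array([ 27,  18,  14,  10,   7,   5,   4,   2], dtype=float)

def mean_value(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

def neg_log_likelihood(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    m = mean_value(np.concatenate(([0.0], t)), a, b)
    dm = np.diff(m)                                   # expected failures per interval
    # Poisson log-likelihood of the observed interval counts
    ll = np.sum(n * np.log(dm) - dm - gammaln(n + 1))
    return -ll

result = minimize(neg_log_likelihood, x0=[100.0, 0.001], method="Nelder-Mead")
a_hat, b_hat = result.x
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.5f}")
print(f"expected remaining faults = {a_hat - mean_value(t[-1], a_hat, b_hat):.1f}")
```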
Software Quality Assurance (SQA) teams play a critical role in the software development process to ensure the absence of software defects. It is not feasible to perform exhaustive SQA tasks (i.e., software testing and code review) on a large software product given the limited SQA resources that are available. Thus, the prioritization of SQA effort is an essential step in all SQA activities. Defect prediction models are used to prioritize risky software modules and to understand the impact of software metrics on the defect-proneness of software modules. The predictions and insights that are derived from defect prediction models can help software teams allocate their limited SQA resources to the modules that are most likely to be defective and avoid the common pitfalls that were associated with defective modules in the past. However, the predictions and insights that are derived from defect prediction models may be inaccurate and unreliable if practitioners do not control for the impact of experimental components (e.g., datasets, metrics, and classifiers) on defect prediction models, which could lead to erroneous decision-making in practice. In this thesis, we investigate the impact of experimental components on the performance and interpretation of defect prediction models. More specifically, we investigate the impact that three often-overlooked experimental components (i.e., issue report mislabelling, parameter optimization of classification techniques, and model validation techniques) have on defect prediction models. Through case studies of systems that span both proprietary and open-source domains, we demonstrate that (1) issue report mislabelling does not impact the precision of defect prediction models, suggesting that researchers can rely on the predictions of defect prediction models that were trained using noisy defect datasets; (2) automated parameter optimization of classification techniques substantially improves the performance and stability of defect prediction models, and also changes their interpretation, suggesting that researchers should no longer shy away from applying parameter optimization to their models; and (3) the out-of-sample bootstrap validation technique produces a good balance between the bias and variance of performance estimates, suggesting that the single-holdout and cross-validation families that are commonly used nowadays should be avoided.
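A minimal sketch of the out-of-sample bootstrap mentioned in point (3): train on a bootstrap sample and evaluate on the rows not drawn, repeated many times. The classifier and synthetic data are placeholders, not the thesis's actual study setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, n_features=15, weights=[0.85], random_state=1)
rng = np.random.default_rng(1)

scores = []
for _ in range(100):                                   # 100 bootstrap iterations
    boot = rng.integers(0, len(y), size=len(y))        # sample rows with replacement
    out_of_sample = np.setdiff1d(np.arange(len(y)), boot)
    model = LogisticRegression(max_iter=1000).fit(X[boot], y[boot])
    scores.append(roc_auc_score(y[out_of_sample],
                                model.predict_proba(X[out_of_sample])[:, 1]))

print(f"out-of-sample bootstrap AUC: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```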
With the rise of software systems ranging from personal assistants to national infrastructure, software defects become more critical concerns, as they can cost millions of dollars and impact human lives. Yet, at the breakneck pace of rapid software development settings (like the DevOps paradigm), today's Quality Assurance (QA) practices are still time-consuming. Continuous analytics for software quality (i.e., defect prediction models) can help development teams prioritize their QA resources and chart better quality improvement plans to avoid past pitfalls that lead to future software defects. Because specialists are needed to design and configure a large number of settings (e.g., data quality, data preprocessing, classification techniques, interpretation techniques), a set of practical guidelines for developing accurate and interpretable defect models has not yet been well developed.
The ultimate goal of my research is to (1) provide practical guidelines on how to develop accurate and interpretable defect models for non-specialists; (2) develop an intelligible defect model that offers suggestions on how to improve both software quality and processes; and (3) integrate defect models into real-world practice in rapid development cycles such as CI/CD settings. My research project is expected to provide significant benefits, including the reduction of software defects and operating costs, while accelerating development productivity for building software systems in many of Australia's critical domains such as Smart Cities and e-Health.
Determination of Software Release Instant of Three-Tier Client Server Softwar... (Waqas Tariq)
The quality of any software system mainly depends on how much testing takes place, what kind of testing methodologies are used, how complex the software is, the amount of effort put in by software developers, and the type of testing environment, subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software, but testing cost will also increase. Conversely, if testing time is too short, software cost can be reduced, provided customers accept the risk of buying unreliable software. However, this increases the cost during the operational phase, since it is more expensive to fix an error in the operational phase than during testing. Therefore, it is essential to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to end users, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software based on a three-tier client-server architecture, which facilitates on-time delivery of a software product that meets a predefined level of reliability while minimizing cost. A numerical example is cited to illustrate the experimental results, showing significant improvements over conventional statistical models based on NHPP.
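The paper's three-tier cost model is not reproduced here; the sketch below only illustrates the general idea of a release-time decision with a textbook NHPP cost model and a reliability constraint, with all parameter values assumed:

```python
import numpy as np

# Goel-Okumoto mean value function with illustrative parameter values
a, b = 120.0, 0.005                     # total faults, fault detection rate
m = lambda t: a * (1.0 - np.exp(-b * t))

c_test   = 100.0     # cost of fixing a fault found during testing
c_field  = 800.0     # cost of fixing a fault found in operation
c_hourly = 50.0      # cost per hour of testing
mission  = 100.0     # operational mission time x for the reliability target
r_target = 0.90      # required reliability R(x | T)

T = np.arange(1.0, 3000.0)
# Simplifying assumption: all faults remaining at release eventually surface in the field
cost = c_test * m(T) + c_field * (a - m(T)) + c_hourly * T
reliability = np.exp(-(m(T + mission) - m(T)))     # R(x | T) for an NHPP

feasible = reliability >= r_target
T_release = T[feasible][np.argmin(cost[feasible])]
print(f"release after ~{T_release:.0f} hours of testing "
      f"(cost {cost[feasible].min():.0f}, R(x|T) {reliability[T == T_release][0]:.3f})")
```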
Reliability is concerned with decreasing faults and their impact. The earlier the faults are detected the better. That's why this presentation talks about automated techniques using machine learning to detect faults as early as possible.
Optimal Selection of Software Reliability Growth Model: A Study (IJEEE)
People use software, and sometimes software fails, so they try to quantify software reliability and to understand how and why it fails. For this purpose, many software reliability models have been developed to estimate the defects remaining in software when it is delivered to the customer. Although many software reliability models now exist, the main issue remains largely unsolved: how to calculate software reliability efficiently. We cannot use one model in every circumstance, because no single model can completely represent all features. This paper describes the circumstances and criteria under which a particular model can be selected.
Software reliability models have been in existence since the early 1970s, and over 200 have been developed. Some of the older models have been discarded based on more recent information about their assumptions, and newer ones have replaced them.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A metrics suite for variable categorization to support program invariants (IJCSEA Journal)
Invariants are generally implicit. Explicitly stating program invariants helps programmers identify program properties that must be preserved while modifying the code. Existing dynamic techniques detect invariants over both relevant and irrelevant/unused variables, and thereby both relevant and irrelevant invariants in the program. Due to the presence of irrelevant variables and irrelevant invariants, the speed and efficiency of these techniques are affected. Also, displaying properties about irrelevant variables and irrelevant invariants distracts the user from concentrating on the properties of relevant variables. To overcome these deficiencies, only relevant variables are considered, and irrelevant variables are ignored. Further, relevant variables are categorized as design variables and non-design variables; for this purpose a metrics suite is proposed. These metrics are validated against Weyuker's principles and applied to the RFV and JLex open source software. Similarly, relevant invariants are categorized as design invariants, non-design invariants and hybrid invariants; for this purpose a set of rules is proposed. This entire process greatly improves the speed and efficiency of dynamic invariant detection techniques.
The reliability of a prediction model depends on the quality of the data from which it was trained. Therefore, defect prediction models may be unreliable if they are trained using noisy data. Recent research suggests that randomly-injected noise that changes the classification (label) of software modules from defective to clean (and vice versa) can impact the performance of defect models. Yet, in reality, incorrectly labelled (i.e., mislabelled) issue reports are likely non-random. In this paper, we study whether mislabelling is random and the impact that realistic mislabelling has on the performance and interpretation of defect models. Through a case study of 3,931 manually-curated issue reports from the Apache Jackrabbit and Lucene systems, we find that: (1) issue report mislabelling is not random; (2) precision is rarely impacted by mislabelled issue reports, suggesting that practitioners can rely on the accuracy of modules labelled as defective by models that are trained using noisy data; (3) however, models trained on noisy data typically achieve 56%-68% of the recall of models trained on clean data; and (4) only the metrics in the top influence rank of our defect models are robust to the noise introduced by mislabelling, suggesting that the less influential metrics of models that are trained on noisy data should not be interpreted or used to make decisions.
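As a small illustration of the kind of comparison described (training on noisy versus clean labels and checking precision and recall on a clean test set), with synthetic data and an arbitrary 20% flip rate rather than the paper's curated issue reports:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

# Inject label noise into the training data only (flip 20% of defective labels to clean),
# a rough stand-in for mislabelled issue reports; the test labels stay clean.
rng = np.random.default_rng(7)
y_noisy = y_tr.copy()
defective = np.flatnonzero(y_tr == 1)
flip = rng.choice(defective, size=int(0.2 * len(defective)), replace=False)
y_noisy[flip] = 0

for name, labels in [("clean", y_tr), ("noisy", y_noisy)]:
    model = RandomForestClassifier(random_state=7).fit(X_tr, labels)
    pred = model.predict(X_te)
    print(f"{name:5s}  precision={precision_score(y_te, pred):.2f}"
          f"  recall={recall_score(y_te, pred):.2f}")
```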
Specification-based Verification of Incomplete Programs (IDES Editor)
Recently, formal methods like model checking and theorem proving have been considered efficient tools for software verification. However, when applied in practice, these techniques suffer a high complexity cost. Combining static analysis with dynamic checking to deal with this problem has become an emerging trend, resulting in the introduction of the concolic testing technique and its variations. However, analysis-based verification techniques always assume that the full source code of the verified program is available, which does not always hold in real-life contexts. In this paper, we propose an approach to tackle this problem, where our contributed ideas are (i) combining function specifications with control-flow analysis to deal with source-missing functions; (ii) generating self-complete programs from incomplete programs by means of concrete execution, thus making them fully verifiable by model checking; and (iii) developing a constraint-based test-case generation technique to significantly reduce the complexity. Our solution has proven viable when successfully deployed for checking the programming work of students.
Principles of Good Screen Design in Websites (Waqas Tariq)
Visual techniques for proper arrangement of the elements on the user screen have helped designers make the screen look good and attractive. Several visual techniques emphasize the arrangement and ordering of screen elements based on particular criteria for the best appearance of the screen. This paper investigates a few significant visual techniques in various web user interfaces and showcases the results for a better understanding of their presence.
A Method for Red Tide Detection and Discrimination of Red Tide Type (spherica... (Waqas Tariq)
A method for red tide detection and discrimination of red tide type (spherical and non-spherical shapes) through polarization measurements of the sea surface is proposed. There are a variety of shapes of red tide types, spherical and non-spherical. The polarization characteristics of these different shapes differ, so discrimination can be done through polarization measurement of the sea surface. Through laboratory-based experiments with water containing Chattonella antiqua, plain water, and water containing Chattonella marina and Chattonella globossa, it is confirmed that the proposed method is valid on a laboratory basis. Field experimental results, conducted at the Ariake Sea in Kyushu, Japan, also show that the proposed method is valid.
Detecting Diagonal Activity to Quantify Harmonic Structure Preservation With ... (Waqas Tariq)
Matrix multiplication is widely utilized in signal and image processing, and in numerous cases it may be faster than conventional algorithms. Images and sounds may be presented in multi-dimensional matrix form. The application under study is detecting diagonal activity in matrices to quantify the amount of harmonic structure preservation of musical tones under different algorithms that may be employed in cochlear implant devices. In this paper, a new matrix is proposed that, when post-multiplied with another matrix, yields an output whose first row represents the indices of the fully active detected diagonals in its upper triangle. A preprocessing matrix manipulation was mandatory. The results show that the Omran matrix is powerful in this application and illustrate the higher performance of one of the utilized algorithms with respect to the others.
Learning of Soccer Player Agents Using a Policy Gradient Method: Coordinatio... (Waqas Tariq)
As an example of multi-agent learning in soccer games of the RoboCup 2D Soccer Simulation League, we dealt with a learning problem between a kicker and a receiver when a direct free kick is awarded just outside the opponent's penalty area. We propose how to use a heuristic function to evaluate an advantageous target point for safely sending/receiving a pass and scoring. The heuristics include an interaction term between a kicker and a receiver to intensify their coordination. To calculate the interaction term, we let a kicker/receiver agent have a receiver's/kicker's action decision model to predict a receiver's/kicker's action. Parameters in the heuristic function can be learned by a kind of reinforcement learning called the policy gradient method. Our experiments show that if the two agents do not have the same type of heuristics, the interaction term based on prediction of a teammate's decision model leads to learning a master-servant relation between a kicker and a receiver, where a receiver is a master and a kicker is a servant.
Cognitive Approach Towards the Maintenance of Web-Sites Through Quality Evalu... (Waqas Tariq)
It is a well-established fact that web applications require frequent maintenance because of cutting-edge business competition. The authors have worked on quality evaluation of web-sites in the Indian e-commerce domain and, as a result of that work, have produced a quality-wise ranking of these sites. According to their work, and also surveys done by various other groups, the Futurebazaar web-site is considered one of the best Indian e-shopping sites. In this research paper the authors assess the maintenance of the same site by incorporating the problems incurred during this evaluation. This exercise gives a real-world maintainability problem of web-sites. The work gives a clear picture of all the quality metrics that are directly or indirectly related to the maintainability of the web-site.
Trend Analysis of Onboard Calibration Data of Terra/ASTER/VNIR and One of the... (Waqas Tariq)
The sensitivity degradation trend is analyzed for ASTER (Advanced Spaceborne Thermal Emission and Reflection radiometer) / Visible and Near-Infrared Radiometer (VNIR) onboard the Terra satellite, and a fault tree analysis is made for the sensitivity degradation. First, it is confirmed through dark-current and shot-noise behaviour analysis that the VNIR detectors are stable enough. It is then confirmed, through analysis of the lamp-monitor photodiode output data, that the radiance of the calibration lamp equipped in VNIR is stable enough. It is also confirmed, through analysis of the output of another photodiode equipped at the front of the VNIR optics, that the radiance at the front of the VNIR optics is, on the other hand, degraded in conjunction with the sensitivity degradation of VNIR (although that photodiode output went off-scale at around one year after launch). The VNIR optics transparency might not be much degraded, because the VNIR output and the latter photodiode output show almost the same degradation. Consequently, one possible cause of the VNIR sensitivity degradation is thruster plume.
A Novel Approach Concerning Wind Power Enhancement (Waqas Tariq)
Being a tropical country, Bangladesh does have wind flow throughout the year. However, the prospect for wind energy in Bangladesh is not at satisfactory level due to low average wind velocities at different regions of the country. The field survey data indicated that the wind velocities are relatively higher from the month of May to August, whereas, it is not so for the rest of the year. Therefore, exploiting the wind energy at low wind velocities is a major predicament in creating a sustainable energy resource for a country with inauspicious forthcoming energy crisis. The scope of this paper concentrates on an innovative approach to harness wind power by installing an auxiliary unit which would only assist the primary turbine unit in case the wind velocity falls under the required value. The auxiliary unit would comprise a secondary turbine, which would be operated by a DC motor connected to a battery system that is charged by a solar panel. A specially designed conduit would encompass both the primary and auxiliary turbine units. A CFD simulation utilizing ANSYS FLOTRAN was carried out to investigate the velocity profiles for different pressure differences at different regions of the prototype conduit. A feasibility analysis of the modified system was eventually carried out for the preferred conduit design.
Generating a Domain Specific Inspection Evaluation Method through an Adaptive... (Waqas Tariq)
The growth of the Internet and related technologies has enabled the development of a new breed of dynamic websites and applications that are growing rapidly in use and that have had a great impact on many businesses. These websites need to be continuously evaluated and monitored to measure their efficiency and effectiveness, to assess user satisfaction, and ultimately to improve their quality. Nearly all studies have used the Heuristic Evaluation (HE) and User Testing (UT) methodologies, which have become the accepted methods for the usability evaluation of User Interface Design (UID); however, the former is general and unlikely to encompass all usability attributes for all website domains, while the latter is expensive, time consuming, and misses consistency problems. To address this need, a new evaluation method is developed that uses the traditional evaluations (HE and UT) in novel ways.
The lack of a methodological framework that can be used to generate a domain-specific evaluation method, which can then be used to improve the usability assessment process for a product in any chosen domain, represents a missing area in usability testing. This paper proposes an adapting framework and evaluates it by generating an evaluation method for assessing and improving the usability of a product, called Domain Specific Inspection (DSI), and then analysing it empirically by applying it on the educational domain. Our experiments show that the adaptive framework is able to build a formative and summative evaluation method that provides optimal results with regard to the identification of comprehensive usability problem areas and relevant usability evaluation method (UEM) metrics, with minimum input in terms of the cost and time usually spent on employing UEMs.
AudiNect: An Aid for the Autonomous Navigation of Visually Impaired People, B... (Waqas Tariq)
In this paper, the realization of a new kind of autonomous navigation aid is presented. The prototype, called AudiNect, is mainly developed as an aid for visually impaired people, though a larger range of applications is also possible. The AudiNect prototype is based on the Kinect device for Xbox 360. On the basis of the Kinect output data, proper acoustic feedback is generated, so that useful depth information from 3D frontal scene can be easily developed and acquired. To this purpose, a number of basic problems have been analyzed, in relation to visually impaired people orientation and movement, through both actual experimentations and a careful literature research in the field. Quite satisfactory results have been reached and discussed, on the basis of proper tests on blindfolded sighted individuals.
One of the fundamental issues in computer science is ordering a list of items. Although there are a number of sorting algorithms, the sorting problem has attracted a great deal of research, because efficient sorting is important for optimizing the use of other algorithms. This paper presents a new sorting algorithm that runs faster by decreasing the number of comparisons at the cost of some extra memory. In this algorithm we use lists to sort the elements. The algorithm was analyzed, implemented and tested, and the results are promising for random data.
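The paper's algorithm is not described here in enough detail to reproduce; as a generic illustration of the trade-off it mentions (extra memory in auxiliary lists in exchange for fewer comparisons), a simple bucket sort looks like this:

```python
def bucket_sort(values, num_buckets=16):
    """Illustrative list-based sort: distribute values into buckets (extra memory),
    sort each small bucket, then concatenate, avoiding most cross-bucket comparisons."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_buckets or 1
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        index = min(int((v - lo) / width), num_buckets - 1)
        buckets[index].append(v)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))      # small, cheap sorts per bucket
    return result

print(bucket_sort([37, 5, 92, 14, 68, 5, 41]))
```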
Identifying the Factors Affecting Users’ Adoption of Social Networking (Waqas Tariq)
Through the rapid expansion of information and communication technologies, social networking sites have received much more attention in the scope of internet communication. Success of a social web primarily depends on users’ satisfaction. In this context, this study aims to identify the influencing factors that affect users’ satisfaction towards social networking site use. A multidimensional model has been proposed based on the Information Quality, System Quality, Environmental and Affective dimensions to assess the effects of key variables – Semantic Intention, Usability, Web-Page Aesthetics, Subjective Norm and Trust- on users’ satisfaction. Facebook was chosen as a focused social networking site, because of its popularity. A comprehensive survey instrument was applied to 203 Facebook users. Also, Structural Equation Modeling, particularly Partial Least Square, was conducted to analyze the proposed research model. As a result, proposed multidimensional research model predicts the factors influencing users’ satisfaction towards social networking site use and relationships among these factors. The findings of this research will be valuable for literature by analyzing the influencing factors that have not been previously researched in the context of social networking satisfaction area.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex... (Waqas Tariq)
Mobile multimedia services are relatively new but have quickly come to dominate people's lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of the study is to examine the role of Perceived Enjoyment (PE) and what determinants contribute to PE in the context of using mobile multimedia services. The results indicate that PE influences Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), and directly influences Behavioral Intention (BI). Aesthetics and flow are key determinants in explaining Perceived Enjoyment (PE) in mobile multimedia usage.
Exploring the Relationship Between Mobile Phone and Senior Citizens: A Malays... (Waqas Tariq)
There is a growing ageing phenomenon with the rise of the ageing population throughout the world. According to the World Health Organization (2002), a growth of 694 million, or 223%, is expected for people aged 60 and over between 1970 and 2025. The growth is especially significant in advanced regions such as North America, Japan, Italy, Germany and the United Kingdom. This growing older adult population significantly impacts the socio-culture, lifestyle, healthcare system, economy, infrastructure and government policy of a nation. However, there are limited research studies on the perception and usage of mobile phones and their services by senior citizens in a developing nation like Malaysia. This paper explores the relationship between mobile phones and senior citizens in Malaysia from the perspective of a developing country. We conducted an exploratory study using contextual interviews with 5 senior citizens on how they perceive their mobile phones. This paper reveals 4 interesting themes from this preliminary study, in addition to findings on the desirable mobile requirements for local senior citizens with respect to health, safety and communication purposes. The findings of this study bring interesting insight to the local telecommunication industry as a whole, and will also serve as groundwork for more in-depth study in the future.
Usage of Autonomy Features in USAR Human-Robot Teams (Waqas Tariq)
This paper presents the results of a high-fidelity urban search and rescue (USAR) simulation at a firefighting training site. The NIFTi system was used, consisting of a semi-autonomous ground robot, a remote-controlled flying robot, a multi-view multimodal operator control unit (OCU), and a tactical-level system for mission planning. From a remote command post, firefighters could interact with the robots through the OCU and with a rescue team in person and via radio. They participated in 40-minute reconnaissance missions and showed that highly autonomous features are not easily accepted in this socio-technological context; in fact, the operators drove three times more manually than with any level of autonomy. The paper identifies several factors, such as reliability, trust, and transparency, that require improvement if end users are to delegate control to the robots, irrespective of how capable the robots are in such missions.
Protocol Type Based Intrusion Detection Using RBF Neural Network (Waqas Tariq)
Intrusion detection systems (IDSs) are very important tools for providing information and computer security. In IDSs, the publicly available KDD’99, has been the most widely deployed data set used by researchers since 1999. Using a common data set has been provided to compare the results of different researches. The aim of this study is to find optimal methods of preprocessing the KDD’99 data set and employ the RBF learning algorithm to apply an Intrusion Detection System.
The Reasons social media contributed to 2011 Egyptian Revolution (Waqas Tariq)
In recent years, social media has become very significant for social networking. In the past its main use was personal, but nowadays it is becoming part of all facets of our lives, social and political. In the first quarter of 2011, the Middle East witnessed many popular uprisings that have yet to reach an end. While these uprisings have often been termed “Facebook Revolutions” or “Twitter Revolutions”, there are many ambiguities as to the extent to which social media affected these movements. In this paper we discuss the role of social media and its impact on the 2011 Egyptian revolution. Though the reasons for the uprising were manifold, we focus on how social media facilitated and accelerated the movement.
Evaluation of Students’ Working Postures in School Workshop (Waqas Tariq)
Awkward postures are one of the major causes of musculoskeletal problems to be prevented at an early stage. Tackling this problem at the initial stage in schools would be of great importance. Tasks should be designed to avoid strain and damage to any part of the body such as the tendons, muscles, ligaments, and especially the back. Musculoskeletal disorder and back pain problems in adults was partly contributed by having such symptoms in their childhood. It is important to understand the symptoms of low back pain in children and design early interventions to prevent chronic symptoms that they may experience when they are adults. Musculoskeletal disorder and back pain problems in children and adolescent may give great implications in future workforce. The objective of this study was to compare working postures among students 13 to 15 years old while performing tasks in school workshop, therefore problems of musculoskeletal pain among students can be identified. Ergonomic assessments used for this study were the RULA and REBA methods. This cross-sectional study was conducted at a secondary school in Malaysia. Ninety-three working postures were evaluated to find out the posture risk level. Analysis result showed the average score are 4.87 and 5.87 for RULA and REBA methods respectively, which indicate medium risk and need for further action. The results also informed that 13-year old students had higher scores for both methods. Comparison using Kruskal-Wallis rank test showed there were significant differences among age groups for both scores and action levels. 13-year old students have the highest mean rank indicating bigger potential risks of awkward postures. In conclusion, both methods proved the workstation is mismatched for students’ body size especially for younger students. An ergonomic intervention is needed to improve students’ working posture, work performance and level of comfort.
Measuring maintainability; software metrics explained (Dennis de Greef)
In a world of ever-changing business requirements, how can you keep your software moving at the same pace?
If you keep adding lines of code around the previous iteration to add new functionality, things can become complex quite fast.
By measuring complexity, you can resolve and prevent bugs, while measuring class responsibility can make refactoring easier, for example.
In this talk Dennis will go through certain concepts of analysing software with automated tools to spit out numbers which tell a story about your code.
A DECISION SUPPORT SYSTEM TO CHOOSE OPTIMAL RELEASE CYCLE LENGTH IN INCREMENT... (ijseajournal)
In the last few years, many software vendors have started delivering projects incrementally with very short release cycles. The best examples of the success of this approach are the Ubuntu operating system, which has a 6-month release cycle, and popular web browsers such as Google Chrome, Opera and Mozilla Firefox. However, very little knowledge is available to project managers to validate the chosen release cycle length. We propose a decision support system that helps validate and estimate release cycle length in the early development phase by assuming that release cycle length is directly affected by three factors: (i) choosing the right requirements for the current cycle, (ii) estimating a proximal time for each requirement, and (iii) requirement-wise feedback from the last iteration based on product reception, model accuracy and failed requirements. We have adapted and used the EVOLVE technique proposed by G. Ruhe to select the best requirements for the current cycle and map them to the time domain using UCP (Use Case Points)-based estimation and feedback factors. The model has been evaluated on both in-house and industry projects.
SRGM Analyzers Tool of SDLC for Software Improving QualityIJERA Editor
Software Reliability Growth Models (SRGM) have been developed to estimate software reliability measures such as
software failure rate, number of remaining faults and software reliability. In this paper, the software analyzers tool proposed
for deriving several software reliability growth models based on Enhanced Non-homogeneous Poisson Process (ENHPP) in
the presence of imperfect debugging and error generation. The proposed models are initially formulated for the case when
there is no differentiation between failure observation and fault removal testing processes and then this extended for the case
when there is a clear differentiation between failure observation and fault removal testing processes. Many Software
Reliability Growth Models (SRGM) have been developed to describe software failures as a random process and can be used
to measure the development status during testing. With SRGM software consultants can easily measure (or evaluate) the
software reliability (or quality) and plot software reliability growth charts.
A Complexity Based Regression Test Selection StrategyCSEIJJournal
Software is unequivocally the foremost and indispensable entity in this technologically driven world.
Therefore quality assurance, and in particular, software testing is a crucial step in the software
development cycle. This paper presents an effective test selection strategy that uses a Spectrum of
Complexity Metrics (SCM). Our aim in this paper is to increase the efficiency of the testing process by
significantly reducing the number of test cases without having a significant drop in test effectiveness. The
strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class,
method, statement) and its characteristics.We use a series of experiments based on three applications with
a significant number of mutants to demonstrate the effectiveness of our selection strategy.For further
evaluation, we compareour approach to boundary value analysis. The results show the capability of our
approach to detect mutants as well as the seeded errors.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
Call for paper 2012, hard copy of Certificate, research paper publishing, where to publish research paper,
journal publishing, how to publish research paper, Call For research paper, international journal, publishing a paper, IJCER, journal of science and technology, how to get a research paper published, publishing a paper, publishing of journal, publishing of research paper, research and review articles, IJCER Journal, How to publish your research paper, publish research paper, open access engineering journal, Engineering journal, Mathematics journal, Physics journal, Chemistry journal, Computer Engineering, Computer Science journal, how to submit your paper, peer review journal, indexed journal, research and review articles, engineering journal, www.ijceronline.com, research journals,
yahoo journals, bing journals, International Journal of Computational Engineering Research, Google journals, hard copy of Certificate,
journal of engineering, online Submission
Software Defect Trend Forecasting In Open Source Projects using A Univariate ...CSCJournals
Our objective in this research is to provide a framework that will allow project managers, business owners, and developers an effective way to forecast the trend in software defects within a software project in real-time. By providing these stakeholders with a mechanism for forecasting defects, they can then provide the necessary resources at the right time in order to remove these defects before they become too much ultimately leading to software failure. In our research, we will not only show general trends in several open-source projects but also show trends in daily, monthly, and yearly activity. Our research shows that we can use this forecasting method up to 6 months out with only an MSE of 0.019. In this paper, we present our technique and methodologies for developing the inputs for the proposed model and the results of testing on seven open source projects. Further, we discuss the prediction models, the performance, and the implementation using the FBProphet framework and the ARIMA model.
This paper describes the different techniques of testing the software. This paper explicitly addresses the idea for testability and the important thing is that the testing itself-not just by saying that testability is a desirable goal, but by showing how to do it. Software testing is the process we used to measure the quality of developed software. Software Testing is not just about error-finding and their solution but also about checking the client requirements and testing that those requirements are met by the software solution. It is the most important functional phase in the Software Development Life Cycle(SDLC) as it exhibits all mistakes, flaws and errors in the developed software. Without finding these errors, technically termed as ‘bugs,’ software development is not considered to be complete. Hence, software testing becomes an important parameter for assuring quality of the software product. We discuss here about when to start and when to stop the testing of software. How errors or Bugs are formed and rectified. How software testing is done i.e. with the help of Team Work.
From previous year researches, it is concluded that testing is playing a vital role in the development of the software product. As, software testing is a single approach to assure the quality of the software so most of the development efforts are put on the software testing. But software testing is an expensive process and consumes a lot of time. So, testing should be start as early as possible in the development to control the money and time problems. Even, testing should be performed at every step in the software development life cycle (SDLC) which is a structured approach used in the development of the software product. Software testing is a tradeoff between budget, time and quality. Now a day, testing becomes a very important activity in terms of exposure, security, performance and usability. Hence, software testing faces a collection of challenges.
EVALUATION OF SOFTWARE DEGRADATION AND FORECASTING FUTURE DEVELOPMENT NEEDS I...ijseajournal
This article is an extended version of a previously published conference paper. In this research, JHotDraw (JHD), a well-tested and widely used open source Java-based graphics framework developed with the best software engineering practice was selected as a test suite. Six versions of this software were profiled, and data collected dynamically, from which four metrics namely (1) entropy (2) software maturity index, COCOMO effort and duration metrics were used to analyze software degradation, maturity level and use
the obtained results as input to time series analysis in order to predict effort and duration period that may
be needed for the development of future versions. The novel idea is that, historical evolution data is used to
project, predict and forecast resource requirements for future developments. The technique presented in
this paper will empower software development decision makers with a viable tool for planning and decision
making.
A Simplified Model for Evaluating Software Reliability at the Developmental Stage
Shelbi Joseph achayanshelbil@gmail.com
Division of Information Technology
School of Engineering
Cochin University of Science and Technology
Cochin, India
Shouri P.V pvshouri@gmail.com
Department of Mechanical Engineering
Model Engineering College
Cochin, India
Jagathy Raj V. P jagathy@cusat.ac.in
School of Management Studies,
Cochin University of Science and Technology
Cochin, India
Abstract
The use of open source software is becoming more and more prevalent, and it is important that the reliability of this software is evaluated. Even though many researchers have tried to establish the failure pattern of different packages, a deterministic model for evaluating reliability has not yet been developed. The present work details a simplified model for evaluating the reliability of open source software based on the available failure data. The methodology involves identifying a fixed number of packages at the start of the analysis period and defining the failure rate based on the failure data for this preset number of packages. The defined failure-rate function is used to arrive at the reliability model. The reliability values obtained using the developed model are also compared with the exact reliability values.
Key words: Bugs, Failure Density, Failure Rate, Open Source Software, Reliability
1. INTRODUCTION
Open Source Software (OSS) has attracted significant attention in recent years [1]. It is being accepted as a viable alternative to commercial software [2]. OSS in general refers to any software whose source code is freely available for distribution [3]. However, the OSS development approach is still not fully understood [4]. Reliability estimation plays a vital role during the developmental phase of open source software. In fact, once a package has stabilized (or been fully developed), the chances of further failure are relatively low and the package will be more or less reliable. However, during the developmental stage failures or bug arrivals are more frequent, and it is important that a model be developed to evaluate the reliability during this period. Bug arrivals usually peak in the code inspection phase and become rather stable in the system test phase [5]. Software reliability evaluation is an increasingly important aspect of the software development process [6].
Reliability can be defined as the probability of failure-free operation of a computer program in a specified environment for a specified period of time [4, 5]. It is evident from the definition that there are four key elements associated with reliability, namely an element of probability, the function of the product, the environmental conditions, and time.

Reliability is simply the probability of success. As success and failure are complementary, a measure of failure is essential to arrive at the reliability. That is,
Reliability = Probability of success = 1 − Probability of failure    (1)
NOMENCLATURE
fd(t)  failure density
N      initial population
R(t)   reliability
t      time
Z(t)   failure rate
λ      constant failure rate

From equation (1), it is evident that the first step in reliability analysis is failure data analysis. This involves fixing up a time interval and noting down the failures in each interval. The number of packages at the start of the analysis is defined as the initial population, and the survivors at any point of time are the difference between the initial population and the failures that have occurred up to that point. The failure rate associated with a time interval can be defined as the ratio of the number of bugs reported during the given time interval to the average population associated with that interval. Once the variation of failure rate with respect to time is established, an equation can be fitted to this variation, which serves as the failure model for reliability estimation. Typical reliability models include Jelinski-Moranda [6], Littlewood [7], Goel-Okumoto [8], the Nelson model [9], the Mills model [10], the Basin model [10], the Halstead model [11] and Musa-Okumoto [4].
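As a brief numerical illustration of the failure-rate computation described above (the figures here are hypothetical and are not taken from the Debian data of Table 1): with an initial population of N = 1880 packages and, say, 100 bugs reported during the first month, the survivors at the end of that month are 1780 and the failure rate for that interval is

Z(1) = 100 / [(1880 + 1780)/2] = 100 / 1830 ≈ 0.055 per month.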
For software projects that have not been in operation long enough, the failure data collected may not be sufficient to provide a decent picture of software quality, which may lead to anomalous reliability estimates [12, 13]. The Weibull function is also used for reliability analysis, and it has been particularly valuable in situations where the data samples are relatively small [14].

Concern about software reliability has been around for a long time [15, 16], and as open source is a relatively novel software development approach, differing significantly from the proprietary waterfall model, there is not yet any mature or stable technique to assess open source software reliability [17].
It is clear from the above discussion that even though a variety of models are available for reliability prediction, a deterministic model is presently not available; in other words, none of these models quantifies reliability. The present work focuses on the development of an algorithm and thereby a simplified method of quantifying the reliability of software.
2. MODEL DEVELOPMENT AND ALGORITHM
An open source program typically consists of multiple modules [18]. Attributes of reliability models are usually defined with respect to time, with four general ways to characterize reliability [19, 20]: time of failure, time interval between failures, cumulative number of faults up to a period of time, and failures found in a time interval. The present methodology involves defining an equation for the pattern of failure based on the available bug arrival rate and developing a generalized model for the reliability of the software. The following are the assumptions involved in the analysis.
1. The software analyzed is open source.
2. As open source software is developed and used by a very large community, environmental changes are not considered.
3. The total number of packages at the beginning of the analysis is assumed to remain constant and is taken as the initial population.
4. The failures of the various packages are assumed to be independent of each other.
5. The model is developed for evaluation of the software reliability at the developmental stage, and the packages that fail during this period are not considered further. It is further assumed that by the end of the developmental stage the bugs associated with the failed packages will have been eliminated and those packages will remain stable.
6. The reliability of the software is inversely proportional to the number of bugs reported at any point of time.
7. The beginning of the time period after which the bug arrival or failure rate remains constant marks the culmination of the developmental stage, and the software is considered stable.
Based on the above assumptions, a six-step algorithm is developed for the analysis, as detailed below.
1. Identify the total initial population. This corresponds to the total number of packages existing at the beginning of the time period, that is, at the start of the analysis.
2. Define a time period and find the bugs reported during this time interval. As a failure may occur anywhere within the interval, the reported failures are indicated at the middle of the interval.
3. Calculate the cumulative failures and thereby the survivors at different points in time.
4. Estimate the failure rate associated with each time interval by dividing the number of failures in the given unit time interval by the average population associated with that interval. The average population associated with a given time interval is the average of the survivors at the beginning and end of the interval.
5. Plot the graph of failure rate against time and obtain the equation defining the relation between failure rate and time.
6. Obtain the expression for the reliability of the software by substituting the equation of the failure rate into the relation

R(t) = e^{-\int_0^t Z(t) dt}
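The six steps above can be carried out directly on a failure-data table. The following Python sketch is a minimal, hedged illustration of the algorithm only: the monthly bug counts are hypothetical placeholders (the paper's actual Debian data appear in Table 1), and numpy's polyfit routine stands in for the straight-line fit of step 5.

```python
# Minimal sketch of the six-step algorithm with hypothetical monthly bug counts.
import numpy as np

N = 1880                                                      # step 1: initial population
bugs = np.array([100, 95, 90, 80, 70, 55, 40, 25, 10, 5])     # step 2: bugs per month (hypothetical)

cumulative = np.cumsum(bugs)                                  # step 3: cumulative failures
survivors = N - cumulative                                    # survivors at the end of each month

# step 4: failure rate = failures in the interval / average population over the interval
start_pop = np.concatenate(([N], survivors[:-1]))
avg_pop = (start_pop + survivors) / 2.0
failure_rate = bugs / avg_pop

# step 5: fit a straight line Z(t) = a*t + b to the failure-rate data
t = np.arange(1, len(bugs) + 1)
a, b = np.polyfit(t, failure_rate, 1)

# step 6: reliability R(t) = exp(-(a*t^2/2 + b*t)), i.e. the integral of the fitted Z(t)
def reliability(time):
    return np.exp(-(a * time ** 2 / 2.0 + b * time))

for month in t:
    print(month, round(float(reliability(month)), 4))
```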
3. RESULTS AND DISCUSSION
A total of 1880 packages were available at the start of the analysis, as per the details available from the official website of Debian [21]. This is taken as the initial population. A time interval of 1 month is fixed and the bug arrival rate during this interval is noted. The reported errors at the different time intervals are given in Table 1. The observations are taken for 1 year, after which the bug arrival is negligible, indicating that the software has more or less stabilized.
TABLE 1: Failure Data Analysis
FIGURE 1. Variation of failure rate with time (failure rate on the y-axis against time in months on the x-axis; fitted trend line: Z(t) = -0.0004 t + 0.078)
The variation of failure rate with respect to time is shown in Fig. 1. It can be seen that from the 8th month onwards the software has somewhat stabilized, indicating the completion of the developmental phase. The failure model corresponding to the failure rate can be expressed by equation (2):

Z(t) = -0.0004 t + 0.078    (2)
The corresponding reliability can be expressed by equation (3) as

R(t) = e^{-\int_0^t (-0.0004 t + 0.078) dt}

That is,

R(t) = e^{0.0004 t^2/2 - 0.078 t}    (3)
The failure density associated with a time interval is the ratio of the number of failures in the given unit time interval to the initial population. Failure density can be related to reliability and failure rate using equation (4) as

fd(t) = R(t) × Z(t)    (4)
Therefore, based on the developed model, the failure density can be expressed as

fd(t) = e^{0.0004 t^2/2 - 0.078 t} × (-0.0004 t + 0.078)    (5)
The reliability of the software at different points in time is calculated using equation (3). The actual values of reliability, obtained by dividing the survivors at a given point in time by the initial population, are also calculated. The Musa model assumes a constant value for the failure rate; taking this constant to be the average of the observed failure rates, the reliability values are calculated using the equation

R(t) = e^{-λt}    (6)
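As a rough illustration (not the authors' own computation), the closed-form expressions above can be evaluated side by side. In the sketch below the value of lam is an assumed placeholder standing in for the average of the observed failure rates; the actual reliability would be obtained separately as survivors divided by N.

```python
# Minimal sketch evaluating eq. (3), eq. (5) and the Musa form eq. (6) for months 1-10.
import math

def r_model(t):
    # Developed model, eq. (3): R(t) = exp(0.0004*t^2/2 - 0.078*t)
    return math.exp(0.0004 * t ** 2 / 2 - 0.078 * t)

def failure_density(t):
    # Eq. (4)/(5): f_d(t) = R(t) * Z(t), with Z(t) = -0.0004*t + 0.078
    return r_model(t) * (-0.0004 * t + 0.078)

def r_musa(t, lam):
    # Musa model, eq. (6): R(t) = exp(-lam*t)
    return math.exp(-lam * t)

lam = 0.076   # assumed average failure rate per month (placeholder value)
for month in range(1, 11):
    print(month,
          round(r_model(month), 3),
          round(r_musa(month, lam), 3),
          round(failure_density(month), 4))
```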
The reliability values calculated using the three different methods and the failure density values are shown in Table 2.

TABLE 2: Reliability and failure density
Fig. 2 shows a comparison of the reliability obtained using the developed simplified model and the Musa model with the actual reliability values. It can be seen that the simplified model and the Musa model provide nearly the same results. Further, these two models very closely approximate the real situation. The variation of failure density with time is shown in Fig. 3.
FIGURE 2. Comparison of reliability obtained using different models (reliability on the y-axis against time in months on the x-axis; series: Reliability (Model), Reliability (Actual), Reliability (Musa))
FIGURE 3. Variation of failure density with time (failure density on the y-axis against time in months on the x-axis)
Fig. 4 compares the reliability values obtained using the model with the exact (actual) values. It can be seen that the percentage error is always within 10% of the actual value, which is a reasonably good result for engineering problems.
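Although the paper does not spell out the error expression, the percentage error plotted in Fig. 4 is presumably computed in the usual way, with the actual reliability taken as survivors divided by the initial population:

% Error = 100 × (R_model(t) − R_actual(t)) / R_actual(t)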
FIGURE 4. Error analysis (% error on the y-axis against time in months on the x-axis)
4. CONCLUSION
A simplified model for the evaluation of software reliability was presented. This is a relatively simple and new method of analysing software reliability, in which the initial population is assumed to remain constant. The method provides fairly good results and the related errors are small. It is hoped that this model will prove to be a powerful tool for software reliability analysis.
5. REFERENCES
1. Ying Zhou and Joseph Davis. "Open Source Software Reliability Model: An Empirical Approach." ACM, 2005.
2. Sharifah Mashita Syed-Mohamad and Tom McBride. "A Comparison of the Reliability Growth of Open Source and In-House Software." 15th Asia-Pacific Software Engineering Conference, IEEE, 2008.
3. Cobra Rahmani, Harvey Siy and Azad Azadmanesh. "An Experimental Analysis of Open Source Software Reliability." Department of Defense/Air Force Office of Scientific Research.
4. Lars M. Karg, Michael Grottke and Arne Beckhaus. "Conformance Quality and Failure Costs in the Software Industry: An Empirical Analysis of Open Source Software." IEEE, 2009.
5. S. H. Kan. Metrics and Models in Software Quality Engineering, 2nd edition. Addison-Wesley, 2003.
6. S. P. LeBlanc and P. A. Roman. "Reliability Estimation of Hierarchical Software Systems." Proceedings of the Annual Reliability and Maintainability Symposium, 2002.
7. J. D. Musa and K. Okumoto. "A Logarithmic Poisson Execution Time Model for Software Reliability Measurement." 7th International Conference on Software Engineering (ICSE), 1984, pp. 230-238.
8. H. Pham. Software Reliability. Springer-Verlag, 2000.
9. E. C. Nelson. A Statistical Basis for Software Reliability Assessment. TRW-SS-73-03, 1973.
10. ShaoPing Wang. Software Engineering. Beijing BUAA Press.
11. M. H. Halstead. Elements of Software Science. North Holland, 1977.
12. Z. Jelinski and P. B. Moranda. "Software Reliability Research." In Statistical Computer Performance Evaluation, W. Freiberger, Ed. New York: Academic Press, 1972, pp. 465-484.
13. B. Littlewood and J. L. Verrall. "A Bayesian Reliability Growth Model for Computer Software." Applied Statistics, Vol. 22, 1973, pp. 332-346.
14. A. L. Goel and K. Okumoto. "A Time-Dependent Error-Detection Rate Model for Software Reliability and Other Performance Measures." IEEE Transactions on Reliability, Vol. R-28, 1979, pp. 206-211.
15. Adalberto Nobiato Crespo and Alberto Pasquini. "Applying Code Coverage Approach to an Infinite Failure Software Reliability Model." XXIII Brazilian Symposium on Software Engineering, IEEE, 2009.
16. A. Hudson. "Program Errors as a Birth and Death Process." Technical Report SP-3011, Santa Monica, CA: System Development Corporation, 1967.
17. Fenghong Zou and Joseph Davis. "Analysing and Modeling Open Source Software Bug Report Data." 19th Australian Conference on Software Engineering, IEEE, 2008.
18. Fenghong Zou and Joseph Davis. "A Model of Bug Dynamics for Open Source." Second International Conference on Secure System Integration and Reliability Improvement, IEEE, 2008.
19. Sharifah Mashita Syed-Mohamad and Tom McBride. "Reliability Growth of Open Source Software Using Defect Analysis." International Conference on Computer Science and Software Engineering, IEEE, 2008.
20. J. D. Musa, A. Iannino and K. Okumoto. Software Reliability: Measurement, Prediction, Application, 1987, pp. 621.
21. http://www.debian.org