The quality of any software system depends largely on how much time is spent on testing, which testing methodologies are used, how complex the software is, the effort invested by the developers, and the testing environment, all subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, yielding more reliable software, but testing cost also increases. Conversely, if the testing time is too short, development cost is reduced, provided the customers accept the risk of buying unreliable software; this increases cost during the operational phase, since fixing an error after release is more expensive than fixing it during testing. It is therefore essential to decide, based on cost and reliability assessment, when to stop testing and release the software to customers. In this paper we present a mechanism for deciding when to stop the testing process and release the software to the end user, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software built on a three-tier client-server architecture, which helps software developers ensure on-time delivery of a product that achieves a predefined level of reliability at minimum cost. A numerical example illustrates the experimental results, which show significant improvements over conventional statistical models based on the non-homogeneous Poisson process (NHPP).
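The release-time decision described above can be made concrete with a small numeric sketch. The model below is illustrative only, not the paper's cost model with risk factor: it assumes a Goel-Okumoto NHPP mean value function and hypothetical cost parameters, and grid-searches for the release time that minimizes expected total cost.

```python
# Illustrative sketch (not the paper's model): choose a release time T that
# minimizes expected cost under a Goel-Okumoto NHPP with mean value function
# m(t) = a * (1 - exp(-b * t)).  All parameter values below are hypothetical.
import numpy as np

a, b = 120.0, 0.05                       # expected total faults, detection rate (assumed)
c_test, c_op, c_time = 1.0, 10.0, 0.5    # fix cost in test, fix cost in field, testing cost/hour
T_lc = 500.0                             # assumed length of the operational life cycle

def m(t):
    """Expected number of faults detected by time t (Goel-Okumoto)."""
    return a * (1.0 - np.exp(-b * t))

def expected_cost(T):
    # faults fixed during testing + residual faults fixed in operation + testing effort
    return c_test * m(T) + c_op * (m(T_lc) - m(T)) + c_time * T

T_grid = np.linspace(1.0, 300.0, 3000)
costs = expected_cost(T_grid)
T_opt = T_grid[np.argmin(costs)]
print(f"optimal release time ~ {T_opt:.1f} h, expected cost ~ {costs.min():.1f}")
```

Because field fixes cost ten times as much as test-phase fixes in this sketch, the minimum lands where the marginal cost of further testing equals the marginal saving from fewer residual faults, around T of roughly 94 hours for these assumed parameters.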
International Journal of Computational Engineering Research (IJCER) is an international, monthly, online journal published in English. The journal publishes original research that contributes significantly to scientific knowledge in engineering and technology.
AN APPROACH FOR TEST CASE PRIORITIZATION BASED UPON VARYING REQUIREMENTS (IJCSEA Journal)
Software testing is a process performed continuously by the development team throughout the software life cycle, with the aim of detecting faults as early as possible. Regression testing is the most suitable technique for this, in which a number of test cases are re-executed. Because the number of test cases can be very large, it is preferable to prioritize them according to certain criteria. In this paper a prioritization strategy is proposed that prioritizes test cases based on requirements analysis: if the requirements vary in the future, the software can be modified in a manner that does not affect its remaining parts. The proposed system improves the testing process and its efficiency with respect to quality, cost, effort, and user satisfaction, and the results of the proposed method are evaluated with the help of a performance evaluation metric.
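As a toy illustration of requirements-based prioritization (the scoring rule, weights, and names below are hypothetical, not the paper's strategy), one can rank test cases by the summed weight of the requirements they cover, weighting volatile requirements more heavily:

```python
# Hedged sketch: prioritize test cases by the total weight of covered requirements.
req_weight = {"R1": 5, "R2": 2, "R3": 8}    # hypothetical requirement weights
coverage = {                                 # hypothetical test -> requirements map
    "TC1": ["R1", "R2"],
    "TC2": ["R3"],
    "TC3": ["R1", "R3"],
}

def score(test_case):
    # a test case inherits the summed priority of every requirement it exercises
    return sum(req_weight[r] for r in coverage[test_case])

ordered = sorted(coverage, key=score, reverse=True)
print(ordered)   # ['TC3', 'TC2', 'TC1'] -> run the highest-value tests first
```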
One of the core quality assurance features, combining fault prevention and fault detection, is often known as the testability approach. Many assessment techniques and quantification methods have evolved for software testability prediction; they identify testability weaknesses or factors and thereby help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at the various phases of the object-oriented software development life cycle. The aim is to find the metrics suite best suited to improving software quality through testability support. The ultimate objective is to establish the groundwork for reducing testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
Software Quality Analysis Using Mutation Testing Scheme (Editor IJMTER)
Software test coverage is used to measure safety assurance, and the safety-critical analysis is carried out for source code written in Java. Testing provides a primary means of assuring software in safety-critical systems. To demonstrate, particularly to a certification authority, that sufficient testing has been performed, it is necessary to achieve the test coverage levels recommended or mandated by safety standards and industry guidelines. Mutation testing provides an alternative or complementary method of measuring test sufficiency, but it has not been widely adopted in the safety-critical industry. The system provides an empirical evaluation of the application of mutation testing to airborne software systems that have already satisfied the coverage requirements for certification. It applies mutation testing to safety-critical software developed using high-integrity subsets of C and Ada, identifies the most effective mutant types, and analyzes the root causes of failures in test cases. Mutation testing can be effective where traditional structural coverage analysis and manual peer review have failed. The results also show that several testing issues have origins beyond the test activity itself, which suggests improvements to the requirements-definition and coding processes. The system further examines the relationship between program characteristics and mutant survival, and considers how program size can help target the test areas most likely to contain dormant faults. Industry feedback is also provided, particularly on how mutation testing can be integrated into a typical verification life cycle for airborne software. The system also covers the safety and criticality levels of Java source code.
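To make the mutation-testing vocabulary concrete, here is a hedged, generic sketch (not the system evaluated in the paper) of seeding mutants into a unit and computing the mutation score as the fraction of mutants killed by the test suite:

```python
# Generic mutation-testing illustration: run a test suite against seeded mutants
# and report the mutation score = killed mutants / total mutants.
def original(x, y):
    return x <= y          # unit under test

mutants = [
    lambda x, y: x < y,    # relational-operator mutant
    lambda x, y: x >= y,   # relational-operator mutant
    lambda x, y: x == y,   # relational-operator mutant
]

tests = [(1, 2), (2, 2), (3, 2)]   # hypothetical test inputs

def killed(mutant):
    # a mutant is "killed" if any test observes a behavioural difference
    return any(mutant(x, y) != original(x, y) for x, y in tests)

score = sum(killed(m) for m in mutants) / len(mutants)
print(f"mutation score: {score:.2f}")   # 1.00 here: every mutant is killed
```

A surviving mutant would indicate either a gap in the test suite or an equivalent mutant, which is exactly the kind of evidence the abstract describes as complementary to structural coverage.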
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo... (Editor IJCATR)
Software reliability is a quantifiable metric, defined as the probability that software operates without failure for a specified period of time in a specified environment. Various software reliability growth models (SRGMs) have been proposed to predict the reliability of software; these models help vendors predict the behaviour of the software before shipment. Reliability is predicted by estimating the parameters of the SRGMs, but the model parameters generally appear in nonlinear relationships, which creates many problems when finding the optimal parameters with traditional techniques such as Maximum Likelihood Estimation (MLE) and Least Squares Estimation. Various stochastic search algorithms have been introduced that make the task of parameter estimation more reliable and computationally easier. The paper explores parameter estimation of NHPP-based reliability models using MLE and using an evolutionary search algorithm called Particle Swarm Optimization.
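As one concrete instance of the stochastic search the paper surveys, the sketch below fits Goel-Okumoto parameters (a, b) with a bare-bones Particle Swarm Optimization by minimizing the sum of squared errors against a made-up cumulative-failure data set; every setting here is an assumption, not the paper's experimental setup.

```python
# Bare-bones PSO fitting m(t) = a * (1 - exp(-b t)) to hypothetical failure counts.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1, 11, dtype=float)                    # test weeks
n_obs = np.array([12, 21, 28, 34, 39, 43, 46, 48, 50, 51], dtype=float)  # cumulative faults

def sse(params):
    a, b = params
    if a <= 0 or b <= 0:                             # reject infeasible particles
        return np.inf
    return np.sum((n_obs - a * (1 - np.exp(-b * t))) ** 2)

n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
x = rng.uniform([10, 0.01], [200, 1.0], size=(n_particles, 2))   # positions (a, b)
v = np.zeros_like(x)                                 # velocities
p = x.copy()                                         # personal bests
p_val = np.array([sse(xi) for xi in x])
g = p[np.argmin(p_val)].copy()                       # global best

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
    x = x + v
    vals = np.array([sse(xi) for xi in x])
    better = vals < p_val
    p[better], p_val[better] = x[better], vals[better]
    g = p[np.argmin(p_val)].copy()

print(f"estimated a ~ {g[0]:.1f}, b ~ {g[1]:.3f}")
```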
Prioritizing Test Cases for Regression Testing: A Model Based Approach (IJTET Journal)
Testing is an important quality-control phase of the Software Development Life Cycle (SDLC), and various testing methodologies are involved in testing an application. Regression testing is performed to ensure that a modified feature or bug fix has not impacted existing functionality; defects are identified by executing a set of test cases. When test suites are large, it is not feasible to determine how much retesting is required to identify deviations, so regression test case selection becomes difficult. Test cases are therefore prioritized, changing the order of execution based on severity. In the proposed model-based approach, prioritized test cases are generated from UML diagrams (sequence and state chart); the modified features are reflected in the generated model and in the number of states and transitions covered. The prioritized test cases are then clustered by severity using a dendrogram approach, which decreases the time and cost of regression testing.
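A minimal sketch of the dendrogram clustering step, assuming a per-test severity score (the scores and the cut threshold below are hypothetical, not the paper's data):

```python
# Cluster prioritized test cases by severity with hierarchical (agglomerative)
# clustering; cutting the dendrogram yields severity groups to schedule together.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

severity = np.array([[9.0], [8.5], [4.0], [3.5], [1.0]])  # per-test severity (assumed)
Z = linkage(severity, method="average")                   # builds the dendrogram
labels = fcluster(Z, t=2.0, criterion="distance")         # cut the tree into clusters
print(labels)   # three clusters: high (9.0, 8.5), medium (4.0, 3.5), low (1.0)
```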
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity, cost, and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 data sets drawn from 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes. The Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good quality-attribute indicators. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results have implications for building quality models across project types.
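The paper's tailored meta-analysis is not reproduced here; as a hedged illustration of the aggregation step, the standard Fisher-z pooling of correlation coefficients across studies looks like this:

```python
# Standard Fisher-z aggregation: transform each correlation r to z = atanh(r),
# average with weights n_i - 3, and back-transform with tanh.
import math

studies = [(0.45, 120), (0.30, 80), (0.55, 200)]   # (rho, sample size), hypothetical

num = sum((n - 3) * math.atanh(r) for r, n in studies)
den = sum(n - 3 for _, n in studies)
pooled = math.tanh(num / den)
print(f"pooled correlation ~ {pooled:.3f}")
```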
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it continues to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
ANALYSIS OF SOFTWARE QUALITY USING SOFTWARE METRICS (ijcsa)
Software metrics are directly linked with measurement in software engineering. Correct measurement is a precondition in any engineering field, and software engineering is no exception: as the size and complexity of software increase, manual inspection becomes a harder task. Most software engineers worry about the quality of software and about how to measure and enhance it. The overall objective of this study was to assess and analyze the software metrics used to measure software products and processes.
In this study, the researcher used a collection of literature from various electronic databases, available since 2008, to understand and survey software metrics. The study identifies software quality as a means of measuring how software is designed and how well the software conforms to that design. Some of the variables sought in software quality are correctness, product quality, scalability, completeness, and absence of bugs. However, quality standards differ from one organization to another, so it is better to apply software metrics to measure software quality, together with the most common current software metrics tools, in order to reduce subjectivity when assessing faults. The central contribution of this study is an overview of software metrics that illustrates developments in this area, and a critical analysis of the main metrics found in the literature.
Software quality is an important issue in the development of successful software applications. Many methods have been applied to improve software quality, and refactoring is one of them. However, the effect of refactoring on the full range of software quality attributes is ambiguous. The goal of this paper is to determine the effect of various refactoring methods on quality attributes and to classify the methods by their measurable effect on particular attributes. The paper studies the reusability, complexity, maintainability, testability, adaptability, understandability, fault proneness, stability, and completeness attributes of software. This, in turn, assists developers in deciding whether to apply a certain refactoring method to improve a desired quality attribute.
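A hypothetical before/after example of one such refactoring (Extract Method) shows the kind of measurable effect the paper classifies: per-function cyclomatic complexity drops while behavior is preserved.

```python
# Before: one function mixing validation and computation (higher complexity).
def invoice_total_before(items, discount):
    total = 0.0
    for price, qty in items:
        if price < 0 or qty < 0:
            raise ValueError("negative price or quantity")
        total += price * qty
    if discount:
        total *= (1 - discount)
    return total

# After Extract Method: validation is separated; each function is simpler
# and independently testable, which is the claimed testability gain.
def _validate(items):
    for price, qty in items:
        if price < 0 or qty < 0:
            raise ValueError("negative price or quantity")

def invoice_total_after(items, discount):
    _validate(items)
    total = sum(price * qty for price, qty in items)
    return total * (1 - discount) if discount else total

# Behavior preserved: both versions agree on the same input.
assert invoice_total_before([(10, 2), (5, 1)], 0.1) == invoice_total_after([(10, 2), (5, 1)], 0.1)
```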
Machine learning approaches are good at solving problems for which little information is available. In most cases, software-domain problems can be characterized as a learning process that depends on, and changes with, various circumstances. A predictive model is constructed using machine learning approaches to classify modules into defective and non-defective. Machine learning techniques help developers retrieve useful information after classification and enable them to analyse data from different perspectives, and they have proven useful for software bug prediction. This study used publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. The results show that most of the machine learning methods performed well on the software bug datasets.
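A minimal sketch of the experimental setup such comparisons use, assuming a random forest and a made-up metric table (public data sets such as the PROMISE repository modules are typically used instead):

```python
# Fit a classifier on static code metrics and evaluate defect prediction.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# columns: lines of code, cyclomatic complexity, number of changes (hypothetical)
X = [[120, 8, 3], [450, 25, 14], [60, 3, 1], [700, 40, 22],
     [90, 5, 2], [300, 18, 9], [50, 2, 0], [520, 30, 17]]
y = [0, 1, 0, 1, 0, 1, 0, 1]   # 1 = defective module, 0 = non-defective

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

Comparative studies like the one summarized above swap the classifier line for each technique under test and report accuracy, precision, or recall on the same split.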
A DECISION SUPPORT SYSTEM TO CHOOSE OPTIMAL RELEASE CYCLE LENGTH IN INCREMENT... (ijseajournal)
In the last few years, many software vendors have started delivering projects incrementally with very short release cycles. The best examples of the success of this approach are the Ubuntu operating system, which has a six-month release cycle, and popular web browsers such as Google Chrome, Opera, and Mozilla Firefox. However, project managers have very little knowledge available to validate the chosen release cycle length. We propose a decision support system that helps validate and estimate release cycle length in the early development phase, assuming that release cycle length is directly affected by three factors: (i) choosing the right requirements for the current cycle, (ii) estimating the approximate time for each requirement, and (iii) requirement-wise feedback from the last iteration based on product reception, model accuracy, and failed requirements. We have adapted the EVOLVE technique proposed by G. Ruhe to select the best requirements for the current cycle and map them to the time domain using Use Case Points (UCP)-based estimation and feedback factors. The model has been evaluated on both in-house and industry projects.
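As a worked instance of the UCP mapping mentioned above (the counts and factor totals are hypothetical; the weights follow Karner's common scheme, which may differ from the paper's calibration):

```python
# Use Case Points: UCP = UUCP * TCF * ECF, then a productivity assumption maps to effort.
unadjusted_ucp = (
    2 * 5 + 3 * 10 + 1 * 15        # use cases: 2 simple(5), 3 average(10), 1 complex(15)
    + 4 * 1 + 2 * 2 + 1 * 3        # actors: 4 simple(1), 2 average(2), 1 complex(3)
)
tcf = 0.6 + 0.01 * 35              # technical complexity factor from the rated T-factors
ecf = 1.4 - 0.03 * 18              # environmental complexity factor from the rated E-factors
ucp = unadjusted_ucp * tcf * ecf
hours = ucp * 20                   # 20 person-hours per UCP is a common assumption
print(f"UCP = {ucp:.1f}, effort ~ {hours:.0f} person-hours")
```

Summing such per-requirement effort estimates, adjusted by the feedback factors, is how a candidate set of requirements would be checked against a proposed cycle length.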
Ordering a list of items is one of the fundamental problems in computer science. Although a number of sorting algorithms exist, the sorting problem continues to attract a great deal of research, because efficient sorting is important for optimizing the use of other algorithms. This paper presents a new sorting algorithm that runs faster by decreasing the number of comparisons at the cost of some extra memory; the algorithm uses lists to sort the elements. The algorithm was analyzed, implemented, and tested, and the results are promising for random data.
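The paper's own algorithm is not reproduced here; as a generic illustration of the trade-off it exploits, extra memory in exchange for fewer comparisons, counting sort orders bounded integers with no element-to-element comparisons at all:

```python
# Counting sort: O(n + k) time, O(k) extra memory, zero element comparisons.
def counting_sort(data, max_value):
    counts = [0] * (max_value + 1)     # extra memory proportional to the key range
    for x in data:
        counts[x] += 1                 # tally occurrences instead of comparing
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out

print(counting_sort([5, 1, 4, 1, 3], 5))   # [1, 1, 3, 4, 5]
```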
Cognitive Approach Towards the Maintenance of Web-Sites Through Quality Evalu... (Waqas Tariq)
It is a well-established fact that web applications require frequent maintenance because of cutting-edge business competition. The authors have worked on quality evaluation of websites in the Indian e-commerce domain and, as a result of that work, produced a quality-wise ranking of these sites. According to their work, and also surveys done by various other groups, the Futurebazaar website is considered one of the best Indian e-shopping sites. In this paper the authors assess the maintenance of that site, incorporating the problems encountered during the evaluation. The exercise presents a real-world website maintainability problem and gives a clear picture of the quality metrics that are directly or indirectly related to website maintainability.
Trend Analysis of Onboard Calibration Data of Terra/ASTER/VNIR and One of the... (Waqas Tariq)
The sensitivity degradation trend is analyzed for ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) / VNIR (Visible and Near-Infrared Radiometer) onboard the Terra satellite, and a fault tree analysis of the sensitivity degradation is made. First, analysis of dark current and shot noise behavior confirms that the VNIR detectors are sufficiently stable. Analysis of the photodiode lamp-monitor output then confirms that the radiance of the calibration lamp equipped in the VNIR is also sufficiently stable. On the other hand, analysis of the output of another photodiode, mounted at the front of the VNIR optics, confirms that the radiance at the front of the optics is degraded in conjunction with the sensitivity degradation of the VNIR, although that photodiode output went off-scale around one year after launch. The transparency of the VNIR optics may not be significantly degraded, since the VNIR output and the latter photodiode output show almost the same degradation. Consequently, one possible cause of the VNIR sensitivity degradation is thruster plume.
Generating a Domain Specific Inspection Evaluation Method through an Adaptive... (Waqas Tariq)
The growth of the Internet and related technologies has enabled the development of a new breed of dynamic websites and applications that are growing rapidly in use and have had a great impact on many businesses. These websites need to be continuously evaluated and monitored to measure their efficiency and effectiveness, to assess user satisfaction, and ultimately to improve their quality. Nearly all studies have used Heuristic Evaluation (HE) and User Testing (UT), which have become the accepted methods for the usability evaluation of user interface design (UID); however, the former is general and unlikely to encompass all usability attributes for all website domains, while the latter is expensive, time-consuming, and misses consistency problems. To address this need, a new evaluation method is developed that uses the traditional evaluations (HE and UT) in novel ways.
The lack of a methodological framework for generating a domain-specific evaluation method, which could then improve the usability assessment process for a product in any chosen domain, represents a gap in usability testing. This paper proposes an adaptive framework and evaluates it by generating an evaluation method for assessing and improving the usability of a product, called Domain Specific Inspection (DSI), and then analysing it empirically by applying it to the educational domain. Our experiments show that the adaptive framework is able to build a formative and summative evaluation method that provides optimal results in identifying comprehensive usability problem areas and relevant usability evaluation method (UEM) metrics, with minimal input in terms of the cost and time usually spent on employing UEMs.
AudiNect: An Aid for the Autonomous Navigation of Visually Impaired People, B... (Waqas Tariq)
This paper presents the realization of a new kind of autonomous navigation aid. The prototype, called AudiNect, is developed mainly as an aid for visually impaired people, though a wider range of applications is also possible. The AudiNect prototype is based on the Kinect device for the Xbox 360: from the Kinect output data, appropriate acoustic feedback is generated, so that useful depth information about the 3D frontal scene can be easily derived. To this end, a number of basic problems relating to the orientation and movement of visually impaired people have been analyzed, through both actual experimentation and a careful literature review. Quite satisfactory results have been reached and are discussed, based on tests with blindfolded sighted individuals.
Principles of Good Screen Design in Websites (Waqas Tariq)
Visual techniques for the proper arrangement of elements on the user screen help designers make the screen look good and attractive. Several visual techniques emphasize the arrangement and ordering of screen elements according to particular criteria for the best appearance of the screen. This paper investigates a few significant visual techniques in various web user interfaces and presents the results for a better understanding of these techniques and their presence.
A Method for Red Tide Detection and Discrimination of Red Tide Type (spherica... (Waqas Tariq)
A method is proposed for red tide detection and for discrimination of red tide type (spherical versus non-spherical) through polarization measurements of the sea surface. Red tides come in a variety of shapes, spherical and non-spherical, and the polarization characteristics of the different shapes differ, so discrimination can be achieved through polarization measurement of the sea surface. Laboratory experiments with water containing Chattonella antiqua, plain water, and water containing Chattonella marina and Chattonella globosa confirm that the proposed method is valid in the laboratory. Field experiments conducted at the Ariake Sea in Kyushu, Japan, also show that the proposed method is valid.
Detecting Diagonal Activity to Quantify Harmonic Structure Preservation With ... (Waqas Tariq)
Matrix multiplication is widely utilized in signal and image processing, and in numerous cases it may be considered faster than conventional algorithms. Images and sounds can be represented in multi-dimensional matrix form. The application under study is detecting diagonal activity in matrices in order to quantify how well different algorithms, which may be employed in cochlear implant devices, preserve the harmonic structure of musical tones. In this paper, a new matrix is proposed such that, when it post-multiplies another matrix, the first row of the output contains the indices of the fully active detected diagonals in its upper triangle; a preprocessing matrix manipulation is required. The results show that the Omran matrix is powerful in this application and demonstrate the higher performance of one of the evaluated algorithms with respect to the others.
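The proposed Omran matrix construction itself is not reproduced here; the hedged sketch below only demonstrates the target property, locating fully active diagonals in the upper triangle of a binary activity matrix:

```python
# Find offsets of fully active diagonals (all ones) in the upper triangle.
import numpy as np

A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])   # hypothetical binary activity matrix

full = [k for k in range(A.shape[1]) if np.all(np.diagonal(A, offset=k) == 1)]
print(full)   # [0, 1]: the main diagonal and first superdiagonal are fully active
```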
A Novel Approach Concerning Wind Power Enhancement (Waqas Tariq)
Being a tropical country, Bangladesh has wind flow throughout the year. However, the prospects for wind energy in Bangladesh are not at a satisfactory level because of low average wind velocities in different regions of the country. Field survey data indicate that wind velocities are relatively higher from May to August, but not for the rest of the year. Exploiting wind energy at low wind velocities is therefore a major predicament in creating a sustainable energy resource for a country facing an inauspicious forthcoming energy crisis. This paper concentrates on an innovative approach to harnessing wind power by installing an auxiliary unit that assists the primary turbine unit only when the wind velocity falls below the required value. The auxiliary unit comprises a secondary turbine operated by a DC motor connected to a battery system charged by a solar panel. A specially designed conduit encompasses both the primary and auxiliary turbine units. A CFD simulation using ANSYS FLOTRAN was carried out to investigate the velocity profiles for different pressure differences in different regions of the prototype conduit. A feasibility analysis of the modified system was eventually carried out for the preferred conduit design.
Learning of Soccer Player Agents Using a Policy Gradient Method: Coordinatio... (Waqas Tariq)
As an example of multi-agent learning in the soccer games of the RoboCup 2D Soccer Simulation League, we address a learning problem between a kicker and a receiver when a direct free kick is awarded just outside the opponent's penalty area. We propose using a heuristic function to evaluate an advantageous target point for safely sending/receiving a pass and scoring. The heuristics include an interaction term between the kicker and the receiver to intensify their coordination; to calculate this term, each kicker/receiver agent carries a decision model of the other, so it can predict the other's action. The parameters of the heuristic function can be learned by a kind of reinforcement learning called the policy gradient method. Our experiments show that if the two agents do not have the same type of heuristics, the interaction term based on prediction of a teammate's decision model leads to learning a master-servant relation between the kicker and the receiver, where the receiver is the master and the kicker is the servant.
Identifying the Factors Affecting Users’ Adoption of Social Networking (Waqas Tariq)
With the rapid expansion of information and communication technologies, social networking sites have received much attention within internet communication. The success of a social web primarily depends on users’ satisfaction. In this context, this study aims to identify the factors that influence users’ satisfaction with social networking site use. A multidimensional model is proposed, based on the information quality, system quality, environmental, and affective dimensions, to assess the effects of the key variables (semantic intention, usability, web-page aesthetics, subjective norm, and trust) on users’ satisfaction. Facebook was chosen as the social networking site of focus because of its popularity. A comprehensive survey instrument was administered to 203 Facebook users, and structural equation modeling, in particular partial least squares, was used to analyze the proposed research model. The resulting multidimensional model predicts the factors influencing users’ satisfaction with social networking site use and the relationships among these factors. The findings are valuable for the literature because they analyze influencing factors that have not previously been researched in the context of social networking satisfaction.
Toward a More Robust Usability concept with Perceived Enjoyment in the contex... (Waqas Tariq)
Mobile multimedia services are relatively new but have quickly come to dominate people's lives, especially among young people. To explain this popularity, this study applies and modifies the Technology Acceptance Model (TAM) to propose a research model and conduct an empirical study. The goal of the study is to examine the role of Perceived Enjoyment (PE), and to identify the determinants that contribute to PE, in the context of mobile multimedia service use. The results indicate that PE influences Perceived Usefulness (PU) and Perceived Ease of Use (PEOU), and directly influences Behavioral Intention (BI). Aesthetics and flow are key determinants in explaining PE in mobile multimedia usage.
Exploring the Relationship Between Mobile Phone and Senior Citizens: A Malays... (Waqas Tariq)
There is a growing ageing phenomenon, with the ageing population rising throughout the world. According to the World Health Organization (2002), the population aged 60 and over is expected to grow by 223%, to 694 million, between 1970 and 2025. The growth is especially significant in some advanced countries such as those of North America, Japan, Italy, Germany, and the United Kingdom. This growing older adult population significantly impacts the socio-culture, lifestyle, healthcare system, economy, infrastructure, and government policy of a nation. However, there is limited research on the perception and usage of mobile phones and their services by senior citizens in a developing nation like Malaysia. This paper explores the relationship between mobile phones and senior citizens in Malaysia from the perspective of a developing country. We conducted an exploratory study using contextual interviews with five senior citizens about how they perceive their mobile phones. The paper reveals four interesting themes from this preliminary study, in addition to findings on the mobile requirements desired by local senior citizens with respect to health, safety, and communication purposes. The findings bring interesting insights to the local telecommunications industry as a whole and will also serve as groundwork for more in-depth study in the future.
Usage of Autonomy Features in USAR Human-Robot Teams (Waqas Tariq)
This paper presents the results of a high-fidelity urban search and rescue (USAR) simulation at a firefighting training site. The NIFTi system was used, consisting of a semi-autonomous ground robot, a remote-controlled flying robot, a multi-view multimodal operator control unit (OCU), and a tactical-level system for mission planning. From a remote command post, firefighters could interact with the robots through the OCU, and with a rescue team in person and via radio. They participated in 40-minute reconnaissance missions, which showed that highly autonomous features are not easily accepted in this socio-technological context; in fact, the operators drove manually three times more than with any level of autonomy. The paper identifies several factors, such as reliability, trust, and transparency, that require improvement if end users are to delegate control to the robots, irrespective of how capable the robots are in such missions.
Protocol Type Based Intrusion Detection Using RBF Neural Network (Waqas Tariq)
Intrusion detection systems (IDSs) are very important tools for providing information and computer security. In IDS research, the publicly available KDD'99 data set has been the most widely used by researchers since 1999; using a common data set makes it possible to compare the results of different studies. The aim of this study is to find optimal methods of preprocessing the KDD'99 data set and to employ the RBF learning algorithm to build an intrusion detection system.
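The study's RBF-network IDS is not reproduced here; the sketch below, run on made-up two-class data rather than KDD'99, shows the usual RBF construction: k-means picks the centers, Gaussian activations form the hidden layer, and the output weights are solved by least squares.

```python
# Minimal RBF-network classifier on synthetic "normal vs attack" feature vectors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])  # two classes
y = np.array([0] * 50 + [1] * 50)   # 0 = normal traffic, 1 = attack (hypothetical)

centers = KMeans(n_clusters=6, n_init=10, random_state=1).fit(X).cluster_centers_
sigma = 1.5   # assumed RBF width

def hidden(X):
    # Gaussian activations of each sample at each center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

H = hidden(X)
w, *_ = np.linalg.lstsq(H, y, rcond=None)     # train the linear output layer
pred = (hidden(X) @ w > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```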
The Reasons social media contributed to 2011 Egyptian Revolution (Waqas Tariq)
In recent years, social media has become very significant for social networking. In the past its main use was personal, but nowadays it is becoming part of all facets of our lives, social and political. In the first quarter of 2011, the Middle East witnessed many popular uprisings that have yet to reach an end. While these uprisings have often been termed “Facebook revolutions” or “Twitter revolutions”, there is much ambiguity as to the extent to which social media affected these movements. In this paper we discuss the role of social media and its impact on the 2011 Egyptian revolution. Though the reasons for the uprising were manifold, we focus on how social media facilitated and accelerated the movement.
Evaluation of Students’ Working Postures in School Workshop (Waqas Tariq)
Awkward postures are one of the major causes of musculoskeletal problems and should be prevented at an early stage; tackling this problem at its initial stage in schools would be of great importance. Tasks should be designed to avoid strain and damage to any part of the body, such as the tendons, muscles, ligaments, and especially the back. Musculoskeletal disorders and back-pain problems in adults are partly attributable to such symptoms in childhood. It is important to understand the symptoms of low back pain in children and to design early interventions that prevent the chronic symptoms they may experience as adults; musculoskeletal disorders and back-pain problems in children and adolescents may have great implications for the future workforce. The objective of this study was to compare working postures among students aged 13 to 15 while performing tasks in a school workshop, so that problems of musculoskeletal pain among students could be identified. The ergonomic assessments used were the RULA and REBA methods. This cross-sectional study was conducted at a secondary school in Malaysia, where ninety-three working postures were evaluated to determine the posture risk level. The analysis showed average scores of 4.87 and 5.87 for the RULA and REBA methods respectively, indicating medium risk and the need for further action. The results also showed that 13-year-old students had higher scores for both methods. Comparison using the Kruskal-Wallis rank test showed significant differences among age groups for both scores and action levels, with 13-year-old students having the highest mean rank, indicating a greater potential risk from awkward postures. In conclusion, both methods show that the workstation is mismatched to the students’ body sizes, especially for younger students; an ergonomic intervention is needed to improve students’ working posture, work performance, and level of comfort.
A Simplified Model for Evaluating Software Reliability at the Developmental S... (Waqas Tariq)
The use of open source software is becoming more and more predominant, and it is important that the reliability of this software be evaluated. Even though many researchers have tried to establish the failure patterns of different packages, a deterministic model for evaluating reliability has not yet been developed. The present work details a simplified model for evaluating the reliability of open source software based on the available failure data. The methodology involves identifying a fixed number of packages at the start of the period and defining the failure rate based on the failure data for this preset number of packages. The resulting failure-rate function is used to arrive at the reliability model, and the reliability values obtained using the developed model are compared with the exact reliability values. Key words: Bugs, Failure density, Failure rate, Open source software, Reliability
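A worked instance of the standard relationship such a model builds on: once a failure-rate function lambda(t) has been defined from the failure data, reliability follows as R(t) = exp(-integral of lambda(tau) from 0 to t). The rate function below is assumed, not the paper's.

```python
# Derive a reliability curve numerically from a fitted failure-rate function.
import numpy as np

def failure_rate(t):
    return 0.08 * np.exp(-0.1 * t)   # hypothetical decreasing rate fitted to failure data

t = np.linspace(0, 50, 501)
# cumulative hazard via trapezoidal integration of the rate
cumulative_hazard = np.concatenate(([0.0], np.cumsum(
    (failure_rate(t[:-1]) + failure_rate(t[1:])) / 2 * np.diff(t))))
R = np.exp(-cumulative_hazard)       # reliability curve over time
print(f"R(10) ~ {R[100]:.3f}")       # closed form: exp(-0.8*(1 - e**-1)) ~ 0.603
```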
Telecardiology and Teletreatment System Design for Heart Failures Using Type-... (Waqas Tariq)
Proper diagnosis of heart failure is critical, since the appropriate treatment depends strongly on the underlying cause; rapid diagnosis is also critical, since the effectiveness of some treatments depends on rapid initiation. In this paper, a new web-based telecardiology system is proposed for diagnosis, consultation, and treatment. The aim of the implemented telecardiology system is to help the practitioner doctor when a patient's clinical findings suggest heart failure. The model consists of three subsystems. The first subsystem is divided into recording and preprocessing phases: an electrocardiography (ECG) signal is recorded from the emergency patient, and the recorded signal is preprocessed to detect the RR intervals. The second subsystem classifies the RR intervals, in other words, it diagnoses heart failure. In this study, a combined classification system is designed using the type-2 fuzzy c-means clustering (T2FCM) algorithm and neural networks; T2FCM is used to improve the performance of the neural networks, which achieve very high accuracy in classifying the RR intervals of ECG signals. This automated telecardiology and diagnostic system assists the practitioner doctor in diagnosing heart failure easily; the training and testing data include five ECG signal classes. The third subsystem provides consultation and teletreatment between the practitioner (or family) doctor and a cardiologist working in a research hospital through a prepared web page (www.telekardiyoloji.com), with interfaces that give both practitioner and expert doctor the opportunity to evaluate the signals. T2FCM is applied to the training data to select the best segments in the second subsystem; a new training set formed from these best segments is classified using a neural network trained with the well-known backpropagation algorithm and generalized delta rule. The recognition accuracy was found to be 99% using the proposed Type-2 Fuzzy Clustering Neural Network (T2FCNN) method.
A survey of predicting software reliability using machine learning methods (IAESIJAI)
In light of technical and technological progress, software has become an urgent need in every aspect of human life, including the medical sector and industrial control; it is therefore imperative that software always works flawlessly. The information technology sector has witnessed rapid expansion in recent years, and software companies can no longer rely only on cost advantages to stay competitive in the market: programmers must deliver reliable, high-quality software. To support estimating and predicting software reliability using machine learning and deep learning, this paper presents a brief overview of the important scientific contributions to the subject of software reliability, and of the highly efficient methods and techniques researchers have found for predicting it.
SRGM Analyzers Tool of SDLC for Software Improving Quality (IJERA Editor)
Software Reliability Growth Models (SRGMs) have been developed to estimate software reliability measures such as the software failure rate, the number of remaining faults, and software reliability. In this paper, a software analyzer tool is proposed for deriving several software reliability growth models based on the Enhanced Non-Homogeneous Poisson Process (ENHPP) in the presence of imperfect debugging and error generation. The proposed models are initially formulated for the case where no distinction is made between the failure-observation and fault-removal testing processes, and are then extended to the case where the two processes are clearly distinguished. Many SRGMs have been developed to describe software failures as a random process and can be used to measure the development status during testing. With SRGMs, software consultants can easily measure (or evaluate) software reliability (or quality) and plot software reliability growth charts.
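For reference, a compact restatement of the core ENHPP relations such derived models build on; this is the standard formulation, and the imperfect-debugging and error-generation extension is only indicated schematically, since the paper's exact variant is not reproduced here.

```latex
% m(t): expected faults exposed by time t; a: initial fault content;
% c(t): test coverage function; R(x|t): reliability over the next x time units.
\begin{align}
  m(t) &= a \cdot c(t), \qquad c(t) \in [0,1],\ c(\infty) = 1 \\
  R(x \mid t) &= \exp\{-[\,m(t+x) - m(t)\,]\}
\end{align}
% With imperfect debugging and error generation the fault content itself grows,
% a \to a(t) with a(0) = a, so the mean value function instead satisfies
% m'(t) = c'(t)\, a(t).
```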
A Complexity Based Regression Test Selection Strategy (CSEIJJournal)
Software is unequivocally the foremost and indispensable entity in this technologically driven world. Quality assurance, and in particular software testing, is therefore a crucial step in the software development cycle. This paper presents an effective test selection strategy that uses a Spectrum of Complexity Metrics (SCM). Our aim is to increase the efficiency of the testing process by significantly reducing the number of test cases without a significant drop in test effectiveness. The strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class, method, statement) and its characteristics. We use a series of experiments based on three applications with a significant number of mutants to demonstrate the effectiveness of our selection strategy. For further evaluation, we compare our approach to boundary value analysis. The results show the capability of our approach to detect mutants as well as the seeded errors.
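A hedged sketch of the general idea, not the exact SCM strategy: score each test by the summed complexity of the code units it exercises, then keep a fixed budget of the highest-scoring tests.

```python
# Complexity-guided test selection: shrink the suite while targeting complex code.
complexity = {"ClassA.m1": 12, "ClassA.m2": 3, "ClassB.m1": 9}  # hypothetical metric values
exercises = {                                                    # hypothetical trace data
    "TC1": ["ClassA.m1"],
    "TC2": ["ClassA.m2"],
    "TC3": ["ClassA.m1", "ClassB.m1"],
}

budget = 2   # how many test cases we can afford to re-run
ranked = sorted(exercises, key=lambda tc: sum(complexity[u] for u in exercises[tc]),
                reverse=True)
selected = ranked[:budget]
print(selected)   # ['TC3', 'TC1']: the suite shrinks while covering the most complex code
```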
Contributors to Reduce Maintainability Cost at the Software Implementation Phase (Waqas Tariq)
Software maintenance is important and difficult to measure, and its cost is the highest of all the phases of software development. One of the most critical processes in software development is reducing software maintainability cost through the quality of the source code at the design step; however, there is a lack of quality models and measures that can help assess the quality attributes of the software maintainability process. Software maintainability suffers from a number of challenges, such as poor source code understanding, poor code quality, and weak adherence to programming standards during maintenance. This work describes model-based factors for assessing software maintenance and explains the steps followed to obtain and validate them. Such a method can be used to reduce software maintenance cost. The results enhance the quality of the source code, increase software understandability, reduce maintenance time and cost, and give confidence for software reusability.
From previous years' research, it is concluded that testing plays a vital role in the development of a software product. As software testing is the single approach used to assure the quality of the software, most development effort is put into it. But software testing is an expensive process and consumes a lot of time, so testing should start as early as possible in development to control cost and schedule problems. Indeed, testing should be performed at every step of the software development life cycle (SDLC), the structured approach used in the development of a software product. Software testing is a tradeoff between budget, time, and quality. Nowadays, testing has become a very important activity in terms of exposure, security, performance, and usability. Hence, software testing faces a collection of challenges.
FROM THE ART OF SOFTWARE TESTING TO TEST-AS-A-SERVICE IN CLOUD COMPUTINGijseajournal
Researchers consider that the first edition of the book "The Art of Software Testing" by Myers (1979)
initiated research in Software Testing. Since then, software testing has gone through evolutions that have
driven standards and tools. This evolution has accompanied the complexity and variety of software
deployment platforms. The migration to the cloud allowed benefits such as scalability, agility, and better
return on investment. Cloud computing requires more significant involvement in software testing to ensure
that services work as expected. In addition to testing cloud applications, cloud computing has paved the
way for testing in the Test-as-a-Service model. This review aims to understand software testing in the
context of cloud computing. Based on the knowledge explained here, we sought to linearize the evolution of
software testing, characterizing fundamental points and allowing us to compose a synthesis of the body of
knowledge in software testing, expanded by the cloud computing paradigm.
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between existing techniques in software quality engineering and thereby designing a framework for rating software quality.
A New Model for Software Cost Estimation Using Harmony Search ijfcstjournal
Accurate and realistic estimation has always been considered a great challenge in the software industry. Software Cost Estimation (SCE) is the standard practice used to manage software projects, and the estimates available in the initial stages of a project determine the planning of its other activities. In fact, estimation is confronted with a number of uncertainties and barriers, yet assessing previous projects is essential to address this problem. Several models have been developed for the analysis of software projects. The classical reference method is the COCOMO model, but other methods are also applied, such as Function Points (FP) and Lines of Code (LOC); expert opinion also matters in this regard. In recent years, the growth and combination of meta-heuristic algorithms with high accuracy have brought about great achievements in software engineering. Meta-heuristic algorithms, which can analyze data in multiple dimensions and identify the optimum solution among them, are analytical tools for data analysis. In this paper, we use the Harmony Search (HS) algorithm for SCE. The proposed model has been assessed on a collection of 60 standard projects from the NASA60 dataset. The experimental results show that the HS algorithm is a good way to determine the weights of the similarity measure factors of software effort and to reduce the magnitude of relative error (MRE).
A Review on Software Fault Detection and Prevention Mechanism in Software Dev...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM...cscpconf
Computing software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be of enormous benefit for estimating the development and testing effort required for yet-to-be-developed software. A relationship between source code and the difficulty of developing it is also examined in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. Therefore, this paper presents a systematic and integrated approach for estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed software, obtained from its software requirement specification (SRS). The IRBC measure serves as the basis for estimating these software development activities, enabling developers and practitioners to predict critical information about software development intricacies. For validation, the proposed measures are compared with various established and prevalent practices proposed in the past. The results validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, and early-alarming, and compare well with other measures proposed in the past.
The Use of Java Swing’s Components to Develop a WidgetWaqas Tariq
A widget is a kind of application that provides a single service, such as a map, news feed, simple clock, or battery-life indicator. This kind of interactive software object has been developed to facilitate user interface (UI) design, and a UI function may be implemented using different widgets with the same function. In this article, we present the widget as a platform that is used in various applications, such as the desktop, web browsers, and mobile phones. We also describe a visual menu of Java Swing's components that will be used to build a widget. We assume the reader has successfully compiled and run a program that uses Swing components.
3D Human Hand Posture Reconstruction Using a Single 2D ImageWaqas Tariq
Passive sensing of the 3D geometric posture of the human hand has been studied extensively over the past decade. However, these research efforts have been hampered by the computational complexity caused by inverse kinematics and 3D reconstruction. In this paper, our objective focuses on 3D hand posture estimation based on a single 2D image with aim of robotic applications. We introduce the human hand model with 27 degrees of freedom (DOFs) and analyze some of its constraints to reduce the DOFs without any significant degradation of performance. A novel algorithm to estimate the 3D hand posture from eight 2D projected feature points is proposed. Experimental results using real images confirm that our algorithm gives good estimates of the 3D hand pose. Keywords: 3D hand posture estimation; Model-based approach; Gesture recognition; human- computer interface; machine vision.
Camera as Mouse and Keyboard for Handicap Person with Troubleshooting Ability...Waqas Tariq
A camera mouse is widely used by handicapped persons to interact with a computer. Crucially, a camera mouse must be able to replace all the roles of a typical mouse and keyboard: it must provide all mouse click events and keyboard functions (including all shortcut keys), it must allow users to troubleshoot by themselves, and it must eliminate neck fatigue when used over long periods. In this paper, we propose a camera mouse system with a timer as the left-click event and blinking as the right-click event. We also modify the original on-screen keyboard layout by adding two buttons (a "drag/drop" button for mouse drag-and-drop events and a button that calls the task manager for troubleshooting) and by changing the behavior of the CTRL, ALT, SHIFT, and CAPS LOCK keys to provide keyboard shortcuts. We further develop a recovery method that allows users to leave the camera's view and come back again, eliminating the neck fatigue effect. Experiments involving several users have been conducted in our laboratory. The results show that our camera mouse allows users to type, perform left- and right-click events, drag and drop, and troubleshoot without using their hands. With this system, handicapped persons can use a computer more comfortably and with reduced eye dryness.
A Proposed Web Accessibility Framework for the Arab DisabledWaqas Tariq
The Web provides unprecedented access to information and interaction for people with disabilities. This paper presents a Web accessibility framework that eases Web access for disabled Arab users and facilitates their lifelong learning. The proposed framework provides disabled Arab users with an easy means of access in their mother language, so they do not have to overcome the barrier of learning the target spoken language. The framework is based on analyzing a web page's meta-language, extracting its content, and reformulating it in a format suitable for disabled users. Its basic objective is to support the equal rights of disabled Arab people to access education and training alongside non-disabled people. Key Words: Arabic Moon code, Arabic Sign Language, Deaf, Deaf-blind, E-learning Interactivity, Moon code, Web accessibility, Web framework, Web System, WWW.
Real Time Blinking Detection Based on Gabor FilterWaqas Tariq
A new method of blinking detection is proposed. It is crucial that a blinking detection method be robust against different users, noise, and changes of eye shape. In this paper, we propose a blinking detection method that measures the distance between the two arcs of the eye (upper part and lower part). We detect the eye arcs by applying a Gabor filter to the eye image; the Gabor filter is advantageous in image processing applications since it extracts spatially localized spectral features, so lines, arches, and other shapes are more easily detected. After the two eye arcs are detected, we measure the distance between them using a connected labeling method. The eye is marked open when the distance between the two arcs exceeds a threshold, and closed otherwise. The experimental results show that our proposed method is robust against different users, noise, and eye shape changes, with near-perfect accuracy.
Computer Input with Human Eyes-Only Using Two Purkinje Images Which Works in ...Waqas Tariq
A method for computer input using human eyes only, based on two Purkinje images, which works in real time without calibration, is proposed. Experimental results show that cornea curvature can be estimated using Purkinje images derived from two light sources, so no calibration is needed to reduce person-to-person differences in cornea curvature. It is found that the proposed system allows users' head movements of 30 degrees in the roll direction and 15 degrees in the pitch direction by utilizing the detected face attitude, which is derived from the face plane consisting of three feature points on the face: the two eyes and the nose or mouth. It is also found that the proposed system works in real time.
Collaborative Learning of Organisational Knowledge Waqas Tariq
This paper presents recent research into methods used in Australian Indigenous Knowledge sharing and looks at how these can support the creation of suitable collaborative environments for timely organisational learning. The protocols and practices as used today and in the past by Indigenous communities are presented and discussed in relation to their relevance to a personalised system of knowledge sharing in modern organisational cultures. This research focuses on user models, knowledge acquisition and integration of data for constructivist learning in a networked repository of organisational knowledge. The data collected in the repository is searched to provide collections of up-to-date and relevant material for training in a work environment. The aim is to improve knowledge collection and sharing in a team environment. This knowledge can then be collated into a story or workflow that represents the present knowledge in the organisation.
Our research aims to propose a global approach for the specification, design, and verification of context-aware Human-Computer Interfaces (HCI). It is a Model-Based Design (MBD) approach. The methodology describes the ubiquitous environment with ontologies, using the OWL standard for this purpose. The specification and modeling of the human-computer interaction are based on Petri nets (PN). This raises the question of representing Petri nets in XML, for which we use the PNML modeling standard. In this paper, we propose an extension of this standard for the specification, generation, and verification of HCIs. The extension is a methodological approach to constructing PNML from Petri nets; the design principle uses the composition of elementary Petri net structures as modular PNML. The objective is to obtain a valid interface through verification of the properties of the elementary Petri nets represented in PNML.
Development of Sign Signal Translation System Based on Altera’s FPGA DE2 BoardWaqas Tariq
The main aim of this paper is to build a system capable of detecting and recognizing hand gestures in an image captured by a camera. The system is built on Altera's FPGA DE2 board, which contains a Nios II soft-core processor. Image processing techniques and a simple but effective algorithm are implemented for this purpose: the image is smoothed to ease the subsequent steps in translating the hand sign signal. The algorithm translates numerical hand sign signals, and the result is displayed on the seven-segment display. Altera's Quartus II, SOPC Builder, and Nios II EDS software are used to construct the system. Using SOPC Builder, the related components on the DE2 board can be interconnected easily and in an orderly way, compared to the traditional method, which requires lengthy source code and is time-consuming. Quartus II is used to compile and download the design to the DE2 board. Then, under Nios II EDS, the C programming language is used to code the hand sign translation algorithm. Recognizing hand sign signals from images can help humans control robots and other applications that require only a simple set of instructions, provided a CMOS sensor is included in the system.
An overview on Advanced Research Works on Brain-Computer InterfaceWaqas Tariq
A brain-computer interface (BCI) is a prominent result in the research field of human-computer synergy, in which direct communication between the brain and an external device augments, assists, and repairs human cognitive functions. Advanced work is still ongoing, such as developing brain-computer interface switch technologies for intermittent (or asynchronous) control in natural environments, building BCIs with fuzzy logic systems, or applying wavelet theory to improve their efficacy, and some useful results have already been found. The need for such brain-machine interfaces is also growing day by day, e.g., for neuropsychological rehabilitation and emotion control. This paper gives an overview of the control theory and some advanced work in the field of brain-machine interfaces.
Virtual teams are used more and more by companies and other organizations to obtain benefits. They are a great way to enable teamwork in situations where people are not sitting in the same physical place at the same time. As companies seek to increase the use of virtual teams, a need exists to explore the context of these teams, the virtuality of a team, and software that may help these teams work virtually. Virtual teams have the same basic principles as traditional teams, with one big difference: the way the team members communicate. Instead of using the dynamics of in-office, face-to-face exchange, they rely on special communication channels enabled by modern technologies, such as e-mail, faxes, phone calls, teleconferences, and virtual meetings. This is why this paper focuses on the issues regarding virtual teams and how these teams are created and progressing in Albania.
USEFul: A Framework to Mainstream Web Site Usability through Automated Evalua...Waqas Tariq
A paradox has been observed whereby web site usability is proven to be an essential element of a web site, yet there exists an abundance of web pages with poor usability. This discrepancy is the result of limitations that currently prevent web developers in the commercial sector from producing usable web sites. In this paper we propose a framework whose objective is to alleviate this problem by automating certain aspects of the usability evaluation process. Mainstreaming comes as a result of automation, enabling a non-expert in the field of usability to conduct the evaluation and reducing the costs associated with it. Additionally, the framework allows the flexibility of adding, modifying or deleting guidelines without altering the code that references them, since the guidelines and the code are two separate components. A comparison of evaluation results obtained with the framework against published evaluations carried out by web site usability professionals reveals that the framework automatically identifies the majority of usability violations. Due to the consistency with which it evaluates, it identified additional guideline-related violations that were not identified by the human evaluators.
Robot Arm Utilized Having Meal Support System Based on Computer Input by Huma...Waqas Tariq
A having-meal support system based on a robot arm and computer input by human eyes only is proposed. The proposed system is developed for handicapped/disabled persons as well as elderly persons, and is tested with able-bodied persons with several shapes and sizes of eyes under a variety of illumination conditions. The test results show that the proposed system works well for selecting the desired foods and retrieving them according to the user's requirements. It is found that the proposed system is 21% faster than a manually controlled robotic arm.
Dynamic Construction of Telugu Speech Corpus for Voice Enabled Text EditorWaqas Tariq
In recent decades speech interactive systems have gained increasing importance. The performance of an ASR system mainly depends on the availability of a large corpus of speech. The conventional method of building a large-vocabulary speech recognizer for any language uses a top-down approach to speech. This approach requires a large speech corpus with sentence- or phoneme-level transcription of the speech utterances. The transcriptions must also cover different speech orders so that the recognizer can build models for all the sounds present. But for the Telugu language, because of its complex nature, a very large, well-annotated speech database is very difficult to build. It is very difficult, if not impossible, to cover all the words of any Indian language, where each word may have thousands or millions of word forms. A significant part of the grammar that is handled by syntax in English (and other similar languages) is handled within morphology in Telugu; phrases comprising several words (tokens) in English map onto a single word in Telugu. Telugu is phonetic in nature in addition to being morphologically rich, which is why speech technology developed for English cannot be applied directly to Telugu. This paper highlights the work carried out in an attempt to build a voice-enabled text editor with automatic term suggestion. The main claim of the paper is the recognition enhancement process we developed for highly inflecting, morphologically rich languages. This method increases speech recognition accuracy with a large reduction in corpus size. It also adapts Telugu words to the database dynamically, resulting in growth of the corpus.
An Improved Approach for Word Ambiguity RemovalWaqas Tariq
Word ambiguity removal is the task of removing ambiguity from a word, i.e., identifying the correct sense of a word in ambiguous sentences. This paper describes a model that uses a part-of-speech tagger and three categories for word sense disambiguation (WSD). Improving the interaction between users and computers is essential for human-computer interaction; for this, supervised and unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding the best-suited domain of a word. Keywords: Human Computer Interaction, Supervised Training, Unsupervised Learning, Word Ambiguity, Word sense disambiguation
Parameters Optimization for Improving ASR Performance in Adverse Real World N...Waqas Tariq
Existing research shows that many techniques and methodologies are available for each step of an Automatic Speech Recognition (ASR) system, but performance (minimizing the Word Error Rate, WER, and maximizing the Word Accuracy Rate, WAR) does not depend only on the technique applied. The research indicates that performance depends mainly on the category of noise, the noise level, and the sizes of the window, frame, and frame overlap considered in existing methods. The main aim of the work presented in this paper is to vary parameters such as window size, frame size, and frame overlap percentage in order to observe the performance of the algorithms under various categories and levels of noise, and to train the system over all parameter sizes and categories of real-world noisy environments to improve the performance of the speech recognition system. The paper presents the results of signal-to-noise ratio (SNR) and accuracy tests under these varying parameters. It is observed that it is very hard to evaluate test results and decide on a parameter size for optimizing ASR performance. Hence, this study further suggests feasible and optimal parameter sizes using a Fuzzy Inference System (FIS) to enhance accuracy in adverse real-world noisy environmental conditions. This work will be helpful for the discriminative training of ubiquitous ASR systems for better Human-Computer Interaction (HCI). Keywords: ASR Performance, ASR Parameters Optimization, Multi-Environmental Training, Fuzzy Inference System for ASR, ubiquitous ASR system, Human Computer Interaction (HCI)
Interface on Usability Testing Indonesia Official Tourism WebsiteWaqas Tariq
The Ministry of Tourism and Creative Economy of the Republic of Indonesia must meet the needs of a wide audience and reach people from all levels of society around the world with Indonesian tourism and travel information. This article details the evolution of one important component of the Indonesia Official Tourism Website as it has grown in functionality and usefulness over several years of use by a live, unrestricted community. We chose this website to examine its interface design and usability and to popularize Indonesian tourism and travel highlights. The analysis follows the criteria specified for usability testing. The usability measures are ease of use (effectiveness, efficiency, consistency, and interface design), ease of learning, errors, and syntax, all of which relate to human-computer interaction. The purpose of this article is to test the usability level of the website, analyze its interface design, and provide suggestions for improving the Indonesia Official Tourism Website based on our analysis.
Monitoring and Visualisation Approach for Collaboration Production Line Envir...Waqas Tariq
In this paper, a tool called SPMonitor is proposed to monitor and visualize the run-time execution of production processes. SPMonitor dynamically visualizes and monitors workflows running in a system. It displays versatile information about currently executing workflows, providing a better understanding of the processes and the general functionality of the domain. Moreover, SPMonitor enhances cooperation between different stakeholders by offering extensive communication and problem-solving features that allow the actors concerned to react more efficiently to anomalies that may occur during workflow execution. The ideas discussed are validated through the study of a real case related to Airbus assembly lines.
Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co...Waqas Tariq
This work is part of a vision-based hand gesture recognition system for a natural human-computer interface. Hand tracking and segmentation are the primary steps of any hand gesture recognition system. The aim of this paper is to develop a robust and efficient hand segmentation algorithm; three segmentation algorithms using different color spaces with the required morphological processing were utilized. The hand tracking and segmentation algorithm (HTS) is found to be the most efficient at handling the challenges of vision-based systems, such as skin color detection, complex background removal, and variable lighting conditions. Noise may sometimes remain in the segmented image due to a dynamic background, so an edge traversal algorithm was developed and applied to the segmented hand contour to remove unwanted background noise.
Vision Based Gesture Recognition Using Neural Networks Approaches: A ReviewWaqas Tariq
The aim of gesture recognition research is to create systems that easily identify gestures and use them for device control or to convey information. In this paper we discuss research done in the area of hand gesture recognition based on artificial neural network approaches. Several hand gesture recognition studies that use neural networks are discussed, comparisons between the methods are presented, advantages and drawbacks of the discussed methods are included, and the implementation tools for each method are presented as well.
Determination of Software Release Instant of Three-Tier Client Server Software System
Yogesh Singh ys66@rediffmail.com
Professor & COE University School of Information Technology
Guru Gobind Singh Indraprastha University, Kashmere Gate,
Delhi - 110403, India
Pradeep Kumar pksharma26@rediffmail.com
Associate Professor, Department of Information Technology
ABES Engineering College affiliated to UPTU Lucknow,
Ghaziabad - 201009, India
Abstract
Quality of any software system mainly depends on how much time is spent on testing, what kind of testing methodologies are used, the complexity of the software, and the amount of effort put in by software developers, subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, leading to more reliable software. On the contrary, if the testing time is too short, the software cost could be reduced, but in that case the customers may take a higher risk of buying unreliable software. Moreover, this will increase the cost during the operational phase, since it is more expensive to fix an error during the operational phase than during the testing phase. It is therefore essential to decide when to stop testing and release the software to customers based on cost and reliability assessment. In this paper we present a mechanism for deciding when to stop the testing process and release the software to the end-user, by developing a software cost model with a risk factor. Based on the proposed method we specifically address the issue of how to decide that we should stop testing and release software based on a three-tier client server architecture, which facilitates software developers in ensuring on-time delivery of a software product that attains a predefined level of reliability while minimizing cost. A numerical example is cited to illustrate the experimental results, showing significant improvements over conventional statistical models based on NHPP.
Keywords: Software Reliability Growth Model (SRGM), Optimal Release Policy, Three-tier Client server
System
1. INTRODUCTION
Several software cost models and optimal release policies have been studied for modeling
software reliability growth trends with different predictive capabilities at different phases of testing.
Software Reliability Growth Models (SRGMs) have been known as most widely used
mathematical tools for measuring, assessing, and predicting software reliability quantitatively. The
project managers and practitioners of software development have a great challenge of how to
develop a reliable software system economically that can be used for reliability assessment in a
realistic environment. One of the major issues is to decide when to stop testing and release the software to the customer on time, at a low price, and with a high degree of reliability.
SRGMs associated with software reliability measurement structure enhance both developer and
customer understanding of software quality and the factors affecting it. The factors include time
for how long a program has been executing, software product characteristics, development
process characteristics including resources, and operational environment in which the software is
used. Since early 1970s, software reliability modeling has been in practice to model past failure
data to predict future behavior. This approach employs either the observed number of failures
discovered per time period or observed time between failures of software. Software reliability
models therefore fall into two basic classes, depending upon types of data the model uses:
failures per time period and time between failures. Basically one of the well-known and most
important applications of SRGMs is to determine the software release instant [1, 2, 3, 7, 8, 9, 10,
11, 12, 14, 17]. In our study we investigate how the software fault detection process can be employed to develop software reliability models that predict the behavior of failure occurrences and the fault content of a software product, and that can be used to determine the software release instant.
The rest of the paper is organized as follows: Section 2 discusses in detail the motivating work done in the field of software reliability growth modeling and release policy. Section 3 describes the mathematical formulation of the software risk cost model, and in section 4 a numerical example is provided to examine the optimal testing policies for the proposed model. Concluding remarks and directions for future work are discussed in section 5.
2. RELATED WORK
Many researchers and practitioners have addressed the problem of the software release instant over the years. In particular, Okumoto and Goel (1980) discussed a cost model addressing linear development cost during the testing and operational phases. Yamada (1983) developed an S-shaped
reliability growth model for software error detection. Yamada and Osaki (1986) presented an
optimal software release policy for a non-homogeneous software error detection rate model.
Ohtera and Yamada (1990) discussed the optimum software release time problem with fault detection during operation by introducing two evaluation criteria for the problem: first, software reliability and, second, mean time between failures. Yamada (1991) discussed software reliability
measurement and assessment of various software reliability growth models and data analysis.
KK Aggarwal and Y Singh (1993) presented a method for determination of software release
instant using a non-homogeneous error detection rate model based on the fact that some faults
can be regenerated during the process of correction. Pham (1996) developed a cost model with
an imperfect debugging and random life cycle besides a penalty cost to determine optimal
release policies for a software system. Kimura et al. (1999) discussed optimal software release
policy with consideration of an operational warranty period during which developer has to pay the
cost for fixing any detected errors. Pham and Zhang (1998) developed a generalized cost model
including fault removal cost, warranty cost and software risk cost due to software failures. They
also developed a GUI tool to determine the optimal software release time. Pham and Zhang
(1999) reviewed optimum release policy literature and concluded that quality of software system
depends on how much time testing takes and what kind of testing methodologies are used.
Hoang Pham (2003) categorically studied software reliability modeling based on
nonhomogeneous Poisson process (NHPP) with environmental factors and cost factors. Chin
Huang (2005) reviewed software reliability growth modeling with generalized logistic testing-effort
function and concluded that generalized logistic testing-effort function can be used to describe
actual consumption of resources during the software development process. Kuei-Chen Chiu et al. (2007) proposed in their study that the learning effect arising from inspecting the testing/debugging code can influence the reliability growth process. Chu and Huang (2008)
further enhanced the predictive capabilities of testing effort dependent software reliability models
by introducing multiple change-points into Weibull-type testing-effort functions.
2.1 Software Reliability Growth Model for Three-Tier Client Server System
In a distributed computing environment, to improve the process of reliability estimation and prediction of software products, we describe a three-tier client server architecture based system for the error detection process during the testing phase. Reliability can be enhanced through various means, such as improving the design process, the effectiveness of testing, manual and automated inspections, familiarization with developers, users and the product, and improving management processes and decisions [1, 2]. The rate of reliability growth depends on how rapidly defects are identified, how fast corrective action takes place, and how soon the impact of the changes is implemented in the operational phase. Nevertheless, all preventive measures need to be taken during fault detection in order to correct and freeze the faults. To formulate our methodology we consider a conventional client server architecture where the presentation logic and application logic are split off into separate components, resulting in the three-tier system shown in Figure 1.
FIGURE 1: A Three-Tier Client Server Architecture View of the proposed model
The presentation layer of the proposed model contains forms that provide the user interface, display data, collect user inputs, and send requests to the next layer. The application layer provides the support services that receive requests for data from the user tier, evaluate them against business rules, and pass them on to the data tier. The data layer includes the data access logic and stores the data at the backend. In modern computing systems, particularly for web based applications where various modules of software are executed on different machines under different network architectures and operating conditions, we apply the software cost model with risk factor to make a realistic reliability prediction and assessment.
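To make the layered flow concrete, here is a minimal Python sketch (ours, not the paper's; all class and method names are hypothetical) of the request/reply path in Figure 1: the presentation layer collects input and forwards a request, the business layer checks it against a rule, and the data layer stores the result.

    # A minimal sketch of the three-tier flow in Figure 1. All class and
    # method names here are hypothetical illustrations, not from the paper.

    class DataLayer:
        """Level 3: data access logic over a simple in-memory store."""
        def __init__(self):
            self._store = {}

        def save(self, key, value):
            self._store[key] = value

        def load(self, key):
            return self._store.get(key)

    class BusinessLayer:
        """Level 2: evaluates requests against business rules, then delegates."""
        def __init__(self, data):
            self._data = data

        def place_order(self, order_id, quantity):
            if quantity <= 0:                       # a business rule
                raise ValueError("quantity must be positive")
            self._data.save(order_id, quantity)     # pass on to the data tier
            return "order %s accepted" % order_id

    class PresentationLayer:
        """Level 1: collects user input and sends requests to the next layer."""
        def __init__(self, app):
            self._app = app

        def submit_form(self, order_id, quantity):
            try:
                return self._app.place_order(order_id, quantity)   # request
            except ValueError as err:
                return "rejected: %s" % err                        # display error

    ui = PresentationLayer(BusinessLayer(DataLayer()))
    print(ui.submit_form("A-17", 3))    # -> order A-17 accepted

Each tier knows only about the tier directly below it, which is exactly the separation the fault model in section 3 exploits: faults can be attributed to, and counted per, layer.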
2.2 Terminology
NHPP: nonhomogeneous Poisson process represents the number of failures experienced up to
time t i.e., {N (t), t ≥ 0}. The NHPP based model provides an analytical framework for describing
the software failure phenomenon during testing phase.
Testing-effort: resource expenditures spent on software testing, e.g., test cases, man-power,
CPU time etc.
Fault: an incorrect logic, incorrect instruction, or inadequate instruction that upon execution will
cause a failure.
Error: a cause of a failure, which is an unacceptable departure from nominal program operation.
Software error: an error made by a programmer or designer, such as a typographical error or an
incorrect numerical value or an omission, etc.
Operational profile: the set of operations that the software can execute, given the probabilities of
their occurrence.
2.3 Acronyms
MLE maximum likelihood estimation
MVF mean value function
SRGM software reliability growth model
SSE sum of squared errors
2.4 Notations used
m(t): mean value function in the NHPP model
a: total number of software errors to be detected
b_i: error correction rate during the initial testing phase of the i-th layer of the model, i = 1, 2, 3
r_i: error generation factor due to correction of errors in the initial testing phase of the i-th layer of the model
t_i: time spent in the initial testing phase at the i-th layer of the model, i = 1, 2, 3
t: total time spent in all three phases of testing
λ(t): fault detection rate per unit time
T: software release time
C1: software test cost per unit time
C2: cost of removing each error per unit time during testing
C3: cost of risk due to software failure
E(T): expected total cost of the software system by time T
N(T): number of errors detected by time T
µ_y: expected time to remove an error during the testing phase, i.e., E(Y)
Y: time to remove an error during the testing phase
R(x|t): conditional software reliability
2.5 Assumptions
The proposed software cost model is developed based on the following assumptions:
1. Initially there is a set-up cost of the software development process.
2. Cost to perform testing is proportional to testing time.
3. Cost to remove errors during testing phase is proportional to total time of removing all
errors detected by the end of testing phase.
4. Time to remove each error during testing follows a truncated exponential distribution.
5. There is a risk cost related to the reliability at each release time point.
2.6 A nonhomogeneous Poisson process model
The counting process {N(t), t ≥ 0} is an NHPP with intensity function λ(t), t ≥ 0, and N(t) has a Poisson distribution with mean value function m(t):

Pr{N(t) = k} = [m(t)]^k exp{-m(t)} / k!,   k = 0, 1, 2, …   (1)

where m(t) = E[N(t)] is the mean value function. Pr{N(t) = k} denotes the probability that N(t) equals k, the mean value function m(t) represents the expected cumulative number of faults detected during the testing time interval (0, t], and the intensity function λ(t) represents the fault detection rate per fault. Using the Goel-Okumoto NHPP reliability model, the mean value function m(T) can be written as:

m(T) = a (1 - exp{-bT}),   a > 0, b > 0   (2)
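As a quick illustration of eqs. (1) and (2), the following Python sketch evaluates the Goel-Okumoto mean value function and the resulting Poisson probability; the parameter values a = 100 and b = 0.1 are illustrative, not estimates from the paper.

    import math

    def go_mean_value(t, a, b):
        """Goel-Okumoto mean value function m(t) = a(1 - exp{-bt}) of eq. (2)."""
        return a * (1.0 - math.exp(-b * t))

    def nhpp_prob(k, t, a, b):
        """Pr{N(t) = k} of eq. (1), using the Goel-Okumoto mean value function."""
        m = go_mean_value(t, a, b)
        return m ** k * math.exp(-m) / math.factorial(k)

    # With (hypothetical) a = 100 total faults and detection rate b = 0.1:
    print(go_mean_value(5, 100, 0.1))    # expected faults detected by t = 5
    print(nhpp_prob(40, 5, 100, 0.1))    # probability of exactly 40 failures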
3. SOFTWARE COST MODEL WITH RISK FACTOR
Here we describe mathematically a software cost model with risk factor for a three-tier client server system consisting of three types of faults, where some faults are easier to detect than others based on the amount of effort required to detect the cause of a failure in order to fix and remove it. These faults are associated with the presentation layer, business layer and database layer during the testing phase, addressing the risk level and the time to remove errors. The optimal release policy that minimizes the expected total software cost is obtained, without loss of generality, by using the mean value function m(T) given as follows:
m(T) = a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i)   (3)

where t = t_1 + t_2 + t_3, a > 0, 0 < b_3 < b_2 < b_1 < 1, and 0 < r_i < 1.
For the three types of faults, one at each layer, the error detection rate function dm(T)/dT can be written as:

λ(T) = a Σ_{i=1}^{3} b_i exp{-b_i T_i} (1 - r_i)   (4)
The probability that a software failure does not occur in (T, T+x], given that the last failure occurred at T ≥ 0 (x ≥ 0), is defined as:

R(x | T) = exp[-{m(T+x) - m(T)}]   (5)

By substituting the values from eq. (3) we get:

R(x | T) = exp[-( a Σ_{i=1}^{3} (1 - exp{-b_i (T_i + x)}) (1 - r_i) - a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i) )]   (6)

It is also observed that R(x | T) and λ(T) are strictly decreasing functions of T, with:

R(x | 0) = exp[- a Σ_{i=1}^{3} (1 - exp{-b_i x}) (1 - r_i) ]   (7)

λ(0) = a Σ_{i=1}^{3} b_i (1 - r_i),   λ(∞) = 0   (8)
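Eqs. (3) through (6) translate directly into code. The sketch below (ours) implements the three-tier mean value function, the detection rate, and the conditional reliability; the parameter values are the MLE estimates reported in section 4.1, while the split of testing time across the layers is an illustrative assumption.

    import math

    def m_T(a, b, r, T):
        """Eq. (3): m(T) = a * sum_i (1 - exp{-b_i T_i}) * (1 - r_i), i = 1..3."""
        return a * sum((1 - math.exp(-bi * Ti)) * (1 - ri)
                       for bi, ri, Ti in zip(b, r, T))

    def lam_T(a, b, r, T):
        """Eq. (4): detection rate λ(T) = a * sum_i b_i exp{-b_i T_i} (1 - r_i)."""
        return a * sum(bi * math.exp(-bi * Ti) * (1 - ri)
                       for bi, ri, Ti in zip(b, r, T))

    def reliability(a, b, r, T, x):
        """Eqs. (5)-(6): R(x|T) = exp(-[m(T+x) - m(T)]), x added to each layer."""
        Tx = [Ti + x for Ti in T]
        return math.exp(-(m_T(a, b, r, Tx) - m_T(a, b, r, T)))

    # MLE estimates from section 4.1; the per-layer testing times are ours:
    a = 143.21
    b = (0.8736, 0.6094, 0.1942)
    r = (0.7536, 0.5104, 0.0272)
    T = (2.0, 1.0, 1.0)
    print(m_T(a, b, r, T), lam_T(a, b, r, T), reliability(a, b, r, T, 0.05))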
Therefore the total expected software system cost, E(T) can be defined as: (i) cost to perform
testing; (ii) cost incurred in removing errors during the testing phase; and (iii) a risk cost due to
software failure.
The cost to perform testing can be defined as
E1(T) = C1T (9)
The expected total time to remove all N(T) errors can be expressed using Zhang [8] as:

E[ Σ_{i=1}^{N(T)} Y_i ] = E[N(T)] E[Y_i] = m(T) µ_y   (10)

where µ_y = [1 - (λT_0 + 1) exp{-λT_0}] / [λ(1 - exp{-λT_0})]

Also, the expected cost to remove all errors detected by time T can be written as:

E2(T) = C2 E[ Σ_{i=1}^{N(T)} Y_i ] = C2 m(T) µ_y   (11)
The risk cost due to software failure after releasing the software is E3(T) = C3 [1 - R(x|T)], where C3 is the cost due to software failure. Taking T to be the release time of the software, the expected total cost incurred during the SDLC can be expressed using Zhang 1998 [8] as follows:

E(T) = C1 T + C2 m(T) µ_y + C3 [1 - R(x|T)]   (12)
By substituting the values from eqs. (6), (7) and (8) we get:

E(T) = C1 T + C2 a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i) · [1 - (λT_0 + 1) exp{-λT_0}] / [λ(1 - exp{-λT_0})]
       + C3 [ 1 - exp( -( a Σ_{i=1}^{3} (1 - exp{-b_i (T_i + x)}) (1 - r_i) - a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i) ) ) ]   (13)
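Building on the helpers in the previous sketch, the cost model of eqs. (10)-(12) can be evaluated as follows; as we read eq. (10), lam and T0 are the rate and truncation point of the truncated exponential repair-time distribution.

    import math

    def mu_y(lam, T0):
        """Mean repair time of the truncated exponential in eq. (10);
        lam and T0 are the rate and truncation point of that distribution."""
        return ((1 - (lam * T0 + 1) * math.exp(-lam * T0))
                / (lam * (1 - math.exp(-lam * T0))))

    def expected_cost(C1, C2, C3, a, b, r, T, x, mu):
        """Eq. (12): E(T) = C1*T + C2*m(T)*mu_y + C3*(1 - R(x|T)), with the
        scalar testing time taken as the sum over the three layers; m_T and
        reliability are the eq. (3)/(6) helpers sketched above."""
        return (C1 * sum(T)
                + C2 * m_T(a, b, r, T) * mu                 # removal cost, eq. (11)
                + C3 * (1 - reliability(a, b, r, T, x)))    # risk cost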
3.2 Optimal Release Policy
Here we discuss the behavior of the software cost model given in eq. (12) and determine the optimal release time T* that minimizes the expected software cost of the system subject to attaining a desired reliability level R0. The optimization problem can be characterized as:

Minimize E(T) given in eq. (12), subject to R(x | T) ≥ R0.
Differentiating eq. (12) with respect to T and equating to zero, we get the optimal testing time T* from:

dE(T)/dT = C1 + C2 a Σ_{i=1}^{3} b_i exp{-b_i T_i} (1 - r_i) · [1 - (λT_0 + 1) exp{-λT_0}] / [λ(1 - exp{-λT_0})]
           + C3 exp[-( a Σ_{i=1}^{3} (1 - exp{-b_i (T_i + x)}) (1 - r_i) - a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i) )]
             · a Σ_{i=1}^{3} b_i exp{-b_i T_i} (1 - r_i) (exp{-b_i x} - 1) = 0   (14)
T* = T1 can then be represented by:

C1 = -{C2 A + C3 B}   (15)

where

A = a Σ_{i=1}^{3} b_i exp{-b_i T_i} (1 - r_i) · [1 - (λT_0 + 1) exp{-λT_0}] / [λ(1 - exp{-λT_0})]

B = exp[-( a Σ_{i=1}^{3} (1 - exp{-b_i (T_i + x)}) (1 - r_i) - a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i) )] · a Σ_{i=1}^{3} b_i exp{-b_i T_i} (1 - r_i) (exp{-b_i x} - 1)
The second derivative of eq. (12) with respect to T yields:

d²E(T)/dT² = -C2 a Σ_{i=1}^{3} b_i² exp{-b_i T_i} (1 - r_i) · [1 - (λT_0 + 1) exp{-λT_0}] / [λ(1 - exp{-λT_0})]
             + C3 exp[-( a Σ_{i=1}^{3} (1 - exp{-b_i (T_i + x)}) (1 - r_i) - a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i) )]
               · a Σ_{i=1}^{3} b_i exp{-b_i T_i} (1 - r_i) (exp{-b_i x} - 1) · [ a Σ_{i=1}^{3} b_i exp{-b_i T_i} (1 - r_i) (exp{-b_i x} - 1) - b_i ]   (16)
Let:

h(T) = a Σ_{i=1}^{3} b_i (1 - r_i) exp{-b_i T_i},   with h(T) ≥ 0 ∀ T   (17)

g(T) = C3 exp[-( a Σ_{i=1}^{3} (1 - exp{-b_i (T_i + x)}) (1 - r_i) - a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i) )] · [ ( Σ_{i=1}^{3} (exp{-b_i x} - 1) )² - Σ_{i=1}^{3} b_i (exp{-b_i x} - 1) ]   (18)

v(T) = -C2 Σ_{i=1}^{3} b_i [1 - (λT_0 + 1) exp{-λT_0}] / [λ(1 - exp{-λT_0})]   (19)
We can then rewrite eq. (16) using eqs. (17) to (19) as:

d²E(T)/dT² = h(T) v(T) + g(T)   (20)

Using eq. (17) we can see that d²E(T)/dT² ≥ 0 at T = T1,
where h(T), g(T), v(T), b_i, T_i, x and r_i are all positive values defined in equations (12) to (20), and the objective function E(T) can be strictly decreasing, increasing, or both in T depending upon the solutions obtained from these equations. Therefore E(T) attains its minimum value at T* = T1 under the following policies:
Optimum Release Policy 1:
T* = T1 when λ(0) ≥ λ(T1)
Optimum Release Policy 2:
T* = 0 when λ(0) < λ(T1)
Now let TR denote the optimal testing time satisfying the condition {R(x|T) ≥ R0}. We can then minimize E(T) as follows:
Optimum Release Policy 3:
(a) If λ(0) ≥ λ(T1) and R(x|0) < R0 then T* = max (T1, TR)
(b) If λ(0) > λ(T1) and R(x|0) ≥ R0 then T* = T1
(c) If λ(0) ≤ λ(T1) and R(x|0) < R0 then T* = TR
(d) If λ(0) ≤ λ(T1) and R(x|0) ≥ R0 then T* = 0
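The three policies can be applied numerically. The sketch below (reusing expected_cost, reliability and lam_T from the earlier sketches) searches a time grid for the cost-minimizing time T1 and the earliest time TR that meets the reliability objective R0, then applies Policy 3; the equal split of testing time across the three layers and the grid search itself are our simplifications, not part of the paper's derivation.

    def optimal_release(C1, C2, C3, a, b, r, x, mu, R0, t_grid):
        """Grid-search sketch of the release policies: T1 minimizes E(T),
        TR is the earliest grid time with R(x|T) >= R0, then Policy 3 applies.
        Splitting the testing time equally across layers is our simplification."""
        def split(t):
            return [t / 3.0] * 3
        costs = [expected_cost(C1, C2, C3, a, b, r, split(t), x, mu)
                 for t in t_grid]
        T1 = t_grid[costs.index(min(costs))]
        TR = next((t for t in t_grid
                   if reliability(a, b, r, split(t), x) >= R0),
                  t_grid[-1])                      # fall back to the grid end
        R0_met_at_0 = reliability(a, b, r, split(0), x) >= R0
        if lam_T(a, b, r, split(0)) >= lam_T(a, b, r, split(T1)):
            return T1 if R0_met_at_0 else max(T1, TR)   # policies 3(b)/3(a)
        return 0.0 if R0_met_at_0 else TR               # policies 3(d)/3(c)

    # e.g. the Case I costs of section 4.1 on a fine grid out to 30 days:
    # optimal_release(50, 100, 150, a, b, r, 0.05, 0.1, 0.90,
    #                 [0.1 * i for i in range(1, 301)])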
4. NUMERICAL EXAMPLE
In this section we present a numerical example to illustrate the determination of the optimal release policies of the proposed model. Testing data has been collected from Misra [5], summarizing the number of failures per one-hour interval of execution time. We have fitted this data to the proposed model using MATLAB version 7.0.1 under the Windows XP environment, assuming that the testing staff work 10 hours per day, five days a week.
Model Name        | Mean Value Function m(T)                          | SSE
Goel-Okumoto [14] | m(T) = a (1 - exp{-bT})                           | 766.1
Yamada-Ohba [15]  | m(T) = a [1 - (1 + bT) exp{-bT}]                  | 592.1
Proposed Model    | m(T) = a Σ_{i=1}^{3} (1 - exp{-b_i T_i}) (1 - r_i) | 241.7

TABLE 1: Comparison of the models
The criterion used for determining goodness of fit is the Sum of Squared Errors (SSE). This statistic measures the deviation of the responses from the fitted values. A value closer to 0 indicates that the model has a smaller random error component and that the fit will be more useful for prediction. The model that produces the smallest SSE has the better performance:

SSE = Σ_{i=1}^{25} (y_i - f_i)²   (21)
where yi is the observed value and fi is the predicted value from the fit. From Table 1 we observe
that the proposed methodology fit the data to a greater extent than the other two models.
Therefore we apply the proposed model to fit the data and in the determination of software
release instant.
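As a minimal sketch of the goodness-of-fit comparison in Table 1, the Python snippet below computes eq. (21) and the two reference mean value functions; the parameters a and b are illustrative placeholders, not the paper's MLE estimates:

```python
import numpy as np

def sse(observed, fitted):
    """Eq. (21): sum of squared errors between observed and fitted values."""
    y = np.asarray(observed, dtype=float)
    f = np.asarray(fitted, dtype=float)
    return float(np.sum((y - f) ** 2))

def m_goel_okumoto(T, a, b):
    """Goel-Okumoto mean value function [14]: m(T) = a(1 - exp(-bT))."""
    return a * (1.0 - np.exp(-b * T))

def m_yamada_ohba(T, a, b):
    """Yamada-Ohba S-shaped mean value function [15]."""
    return a * (1.0 - (1.0 + b * T) * np.exp(-b * T))
```

Given the 25 one-hour failure observations and each model's fitted parameters, the model with the smallest value of `sse(...)` is preferred for prediction, as in Table 1.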
4.1 The impact of cost coefficients on the expected total cost
The impact of the cost coefficients C1, C2 and C3 on the expected total cost has been evaluated
under different conditions. We increase the values of C1, C2 and C3 while keeping the values of the
other parameters unchanged, without loss of generality. The parameters of the present model are
estimated using the maximum likelihood estimation (MLE) method, and the other related parameters
are as follows: expected total potential errors (a) = 143.21, b1 = 0.8736, b2 = 0.6094, b3 = 0.1942,
r1 = 0.7536, r2 = 0.5104, r3 = 0.0272, and mean value function m(T) = 0.4248. (A sketch evaluating
these quantities appears after the three cases below.)
Case I:
C1=$50/day, C2=$100/day, C3=$150/day, µy =0.1 and x=0.05
Case II:
C1=$150/day, C2=$100/day, C3=$50/day, µy =0.1 and x=0.05
Case III:
C1=$50/day, C2=$100/day, C3=$200/day, µy =0.2 and x=0.05
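The cost sweep behind the results that follow can be sketched as below. The mean value function and the MLE estimates are taken from Section 4.1, and the conditional reliability R(x|T) = exp{−[m(T+x) − m(T)]} is the standard NHPP expression; the cost function E(T) here is an assumed Pham-Zhang-style stand-in for the paper's eq. (12) (cf. [8], [24]), so its outputs are illustrative only:

```python
import numpy as np

# MLE estimates reported in Section 4.1
a = 143.21
b = np.array([0.8736, 0.6094, 0.1942])
r = np.array([0.7536, 0.5104, 0.0272])

def m(T):
    """Proposed three-tier mean value function, with a single testing
    time T applied to all three tiers (T_i = T)."""
    return a * np.sum((1.0 - np.exp(-b * T)) * (1.0 - r))

def R(x, T):
    """Conditional reliability of an NHPP model: exp{-[m(T+x) - m(T)]}."""
    return np.exp(-(m(T + x) - m(T)))

def E(T, C1, C2, C3, mu_y, x):
    """Assumed cost structure (illustrative stand-in for eq. (12))."""
    return C1 * T + C2 * mu_y * m(T) + C3 * (1.0 - R(x, T))

# Case I: sweep days 1..20 and take the minimum-cost day as T*.
days = np.arange(1, 21)
costs = np.array([E(t, C1=50, C2=100, C3=150, mu_y=0.1, x=0.05) for t in days])
T_star = int(days[costs.argmin()])
```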
4.2 Observations
• From the results for the three cases above we observe that increasing the cost factors C1 and
C3 makes the total expected cost very high initially, after which it decreases gradually; this has a
significant impact on the optimal release policy of a software product. In other words, if developers
do not spend a sufficient amount of time on testing before release, the product becomes riskier and
less reliable for the customer, for the obvious reason that removing an error after delivery requires
more effort and involves more risk, which in turn calls for a longer testing time.
• We also observe that even though the cost factor C2 is held constant in all three cases, increasing
the cost factor C1 together with C3 doubles the expected error-removal time µy, which is reasonably
encouraging. We summarize the total expected cost of the software and the expected number of
errors detected by time T while keeping the reliability objective above 90%.
• Based on the calculations for case I, the total expected cost is E1(T*) = $382.70 and the reliability
of the software application at the end of testing on the 4th day is 0.9127, i.e., more than 90%. After
changing the cost parameters as in case II, the total expected cost is E2(T*) = $661.39 at a
marginally lower reliability of 0.9018 (> 90%).
• Finally, in case III we achieve a reliability level of 0.9239 (> 92%) at a cost of E3(T*) = $509.28
after improving the software continuously in the operational phase, which is very satisfactory. The
results are summarized in Table 2, and figures 2 to 5 show the variation of the total expected testing
cost, the reliability achieved at the end of the testing phase, and the expected number of errors
detected at the release instant.
• Furthermore, the validity of the proposed assessment method depends heavily on how
representative the available software failure data are; data from various sources fluctuate highly and
are not updated frequently by the research community. This issue is beyond the scope of this paper
and needs to be addressed separately in the near future.
Release | Expected total | Expected total | Expected total | Expected no. of | Conditional
time T  | cost E1(T)     | cost E2(T)     | cost E3(T)     | errors to be    | reliability
(days)  | Case I (US$)   | Case II (US$)  | Case III (US$) | detected m(T)   | R(x|T)
--------|----------------|----------------|----------------|-----------------|------------
1       | 587.24         | 718.26         | 1155.50        | 58.38           | 0.9002
2       | 457.21         | 661.39*        | 818.59         | 36.35           | 0.9018
3       | 399.48         | 694.61         | 644.09         | 24.22           | 0.9112
4       | 382.70*        | 774.97         | 557.67         | 17.11           | 0.9127
5       | 388.88         | 880.73         | 519.61         | 12.67           | 0.9185
6       | 408.44         | 1000.84        | 509.28*        | 9.70            | 0.9239
7       | 436.25         | 1129.52        | 515.77         | 7.62            | 0.9327
8       | 469.44         | 1263.65        | 533.09         | 6.08            | 0.9421
9       | 506.35         | 1401.46        | 557.81         | 4.90            | 0.9510
10      | 545.98         | 1541.87        | 587.84         | 3.98            | 0.9589
11      | 587.63         | 1684.21        | 621.84         | 3.25            | 0.9657
12      | 630.87         | 1828.03        | 658.90         | 2.66            | 0.9715
13      | 675.36         | 1973.00        | 698.37         | 2.18            | 0.9764
14      | 720.85         | 2118.90        | 739.76         | 1.79            | 0.9805
15      | 767.16         | 2265.54        | 782.70         | 1.47            | 0.9839
16      | 814.12         | 2412.79        | 826.91         | 1.21            | 0.9867
17      | 861.62         | 2560.53        | 872.15         | 1.00            | 0.9890
18      | 909.57         | 2708.67        | 918.23         | 0.82            | 0.9910
19      | 957.88         | 2857.13        | 965.01         | 0.68            | 0.9925
20      | 1006.49        | 3005.87        | 1012.36        | 0.56            | 0.9939

TABLE 2: Summary of total expected cost E(T), R(x|T) and m(T)
(* marks the minimum total expected cost, i.e., the optimal release instant for that case)

FIGURE 2: Release Instant and Total Expected Cost for Different Cost Factors
(cost in US$ vs. testing time in days for E(T), E1(T) and E2(T))
FIGURE 3: Expected Cost During Testing Phase (E(T) in US$ vs. testing time in days)
FIGURE 4: Expected Cost with Reliability Achieved During Testing Phase
(m(T) and E(T) vs. time in days)

FIGURE 5: Cost and Reliability with Mean Value Function m(T) and Release Instant
(cost in US$ vs. testing time in days for m(T), R(x|T), E(T), E1(T) and E2(T))
5. CONCLUSION & FUTURE WORK
Practically, project managers need to know when testing can be stopped so that they can deliver
the product to customers while attaining the required software quality and minimizing the related
testing costs. In this paper we have formulated a release policy for a software reliability growth
model under three-tier client-server architecture, reflecting the cost of postponing software release
based on testing effort. With the help of the proposed cost model and the designed release policies,
it can be determined whether more testing is required or whether the software has been tested
sufficiently to allow its release to the customer for operational use. The results revealed that the
proposed model not only provides a good fit but also offers a good explanation of the software
reliability growth process. However, a comparative study evaluating the effectiveness of the
proposed model against other existing software failure models on additional failure data sets from
standard real-life projects would further supplement the present technique.
In the near future the proposed model can be extended by considering the change-point problem
and by introducing an extended warranty period. The change-point problem arises when some
factors of the testing process change, which can subsequently cause the software failure intensity
function to decrease or increase. By extending the warranty period, the penalty cost may be
reduced to a certain level, provided the maintenance cost during the operational phase is partially
paid by the customer.
Acknowledgement
The authors would like to thank the editor and the referees for their useful suggestions and valuable
comments.
REFERENCES
[1] K. K. Aggarwal and Yogesh Singh, “Determination of software release instant using a
nonhomogeneous error detection rate model”. Microelectronics Reliability, Vol. 33, No. 6, pp.
803-807, 1993.
[2] K. K. Aggarwal and Yogesh Singh, “Software Engineering: Programs, Documentation & Operating
Procedures”, New Age International Publishers, third edition, pp. 191-324 (2008).
[3] Yogesh Singh and Pradeep Kumar, “A software reliability growth model for three-tier client-server
system”. IJCA, Vol. 1, No. 13, doi: 10.5120/289-451, 2010.
[4] Hoang Pham, “System Software Reliability”, Springer Series in Reliability Engineering, pp.
315-344 (2006).
[5] Misra, P.N. “Software reliability analysis models”. IBM Systems Journal (1983), 22, 262-70.
[6] www.dacs.org “Software Life Cycle Empirical/Experience Database (SLED) published by
Data & Analysis Center for Software (DACS)”.
[7] Kuei-Chen Chiu, Yeu-Shiang Huang, Tzai-Zang Lee, “A study of software reliability growth
from the perspective of learning effects”. Reliability Engineering and System Safety 93 (2008)
1410-1421.
[8] Xuemei Zhang and Hoang Pham, “A software cost model with warranty cost, error removal
times and risk costs”. IIE Transactions (1998) 30, 1135-1142.
[9] Hoang Pham, “Software reliability and cost models: perspectives, comparison, and practice”.
European Journal of Operational Research 149 (2003) 475-489.
[10] Chin-Yu Huang, “Cost-reliability-optimal release policy for software reliability models
incorporating improvements in testing efficiency”. The Journal of Systems and Software 77
(2005) 139-155.
[11] Chu-Ti Lin, Chin-Yu Huang, “Enhancing and measuring the predictive capabilities of testing-
effort dependent software reliability models”. The Journal of Systems and Software 81 (2008)
1025-1038.
[12] Chin-Yu Huang and Sy-Yen Kuo, “Analysis of incorporating logistic testing-effort function into
software reliability modeling”. IEEE Transactions on Reliability, Vol. 51, No. 3, September
2002.
[13] Yamada S., Ohtera H. and Narihisa H. “Software reliability growth models with testing effort”.
IEEE Transactions on Reliability 1986; 35, pp. 19-23.
[14] Goel AL, Okumoto K. “Time-dependent fault detection rate model for software and other
performance measures”. IEEE Transactions on Reliability 1979; 28:206-11.
[15] Yamada S., Ohba M. “S-shaped software reliability modeling for software error detection”.
IEEE Transactions on Reliability 1983; 32:475-84.
[16] Yamada S., Narihisa H. and Osaki S. “Optimum release policies for a software system with a
scheduled software delivery time”. Int. J. System Science 1984, 15, pp. 905-914.
[17] Yamada S., Narihisa H. and Osaki S. “Optimum software release policies with simultaneous
cost and reliability requirements”. European Journal of Operational Research 1987, 31, pp. 46-
51.
[18] Chin-Yu Huang, Sy-Yen Kuo, Michael R. Lyu, “An assessment of testing-effort dependent
software reliability growth model”. IEEE Transactions on Reliability, Vol. 56, No. 2, June 2007.
[19] P K Kapur, R B Garg, S K Kumar, “Contributions to Hardware & Software Reliability” World
Scientific, pp. 89-147 (1999).
[20] Kapur PK, Bhalla VK. “Optimal release policies for a flexible software reliability growth
model”. Reliability Engineering and System Safety 1992; 35:49-54.
[21] Kimura M, Toyota T, Yamada S. “Economic analysis of software release problems with
warranty cost and reliability requirement”. Reliability Engineering and System Safety 1999;
66:49-55.
[22] Pham H, Zhang X. “A software cost model with warranty and risk costs”. IEEE Transactions
on Computers 1999; 48:71-75.
[23] Chin-Yu Huang, Sy-Yen Kuo and Michael R. Lyu, “Optimum software release policy based on
cost and reliability with testing efficiency”. IEEE 1999.
[24] Pham, H. and Zhang, X. “A software cost model with error removal times and risk costs”.
International Journal of Systems Science (1998), 29, 435-442.
[25] Shinji Inoue and Shigeru Yamada, “Optimal software release policy with change point”. IEEE
978-1-4244-2630-0/08, 2008.