Software Reliability Prediction
Shishir Kumar Saha
Dept. of Software Engineering,
Blekinge Institute of Technology,
Karlskrona, Sweden.
shsd10@student.bth.se

Mirza Mohymen
Dept. of Software Engineering,
Blekinge Institute of Technology,
Karlskrona, Sweden.
mimo10@student.bth.se

ABSTRACT
In this paper, we describe the analytical procedure used to choose a suitable software reliability growth model for forecasting the number of defects in the second release, based on the failure data of the first release and the first few weeks of failure data of the second release. Evaluating the results of this fault-data analysis, we recommend an appropriate time for putting the second release into operation.

Keywords
Software reliability prediction, Yamada's S-Shaped Model, Goel-Okumoto Non-homogeneous Poisson Process Model

1. INTRODUCTION
Unpredicted errors or defects in software not only hamper business value but also escalate the estimated development time and cost. A software reliability growth model is therefore crucial for predicting the future behavior of the software [1].

In this report, we present our motivation for choosing the reliability growth model, considering several alternative models, their properties, and the nature of the fault data. We then calculate the number of predicted defects in the second release by applying the chosen reliability model to the historical fault data, and recommend a time for putting the second release into operation, considering the uncertainties of the method and of the release date.

2. RELIABILITY MODEL SELECTION
In the assignment, 50 weeks of fault data for the first release and 18 weeks of fault data for the second release are provided. There are different categories of analytical models used for software reliability measurement: Times Between Failures Models, Fault Seeding Models, Input Domain Based Models, and Failure Count Models.

Times Between Failures Models are based on the time intervals between failures. According to the assignment description and data, the time between failures is not given, so this category is not chosen. In Fault Seeding Models, defects are deliberately seeded into the software; no seeding occurred here, so this category is not applicable. Input Domain Based Models need input data to build test cases and observe failures in order to measure reliability, but the assignment provides no input domain or input data, so these models are not suitable either. Fault Count Models are based on failure counts in a specified time interval. The assignment data show that the interval is one week, hence homogeneous, and that there are no overlapping faults. Fault Count Models are therefore chosen.

Fault Count Models comprise several sub-models, for example: the Goel-Okumoto Non-homogeneous Poisson Process Model, the Goel Generalized Non-homogeneous Poisson Process Model, Yamada's S-Shaped Model, Brook's and Motley's Binomial Model, and Brook's and Motley's Poisson Model. Using the SMERFS3 tool, the first-release data up to week 26 were used to estimate the total number of faults. Five models were selected in SMERFS3, and after calculation, the result of Yamada's S-Shaped Model was closest to the total number of faults of the first release: the tool estimated approximately 195 total faults, which is very close to the actual total of 198. Therefore, Yamada's S-Shaped Model is used to estimate the total number of faults in the second release. After estimating the total faults, the NHPP model is used for further calculation. Defect detection depends on the time interval, and this model is suitable for calculating and estimating failure counts [2]. The data also show that faults do not overlap across time intervals, so the NHPP model is selected.

3. RELEASE DATA ANALYSIS

3.1 Goel-Okumoto Model
The Goel-Okumoto (G-O) model is a quantitative software reliability assessment model. It relies on a Non-homogeneous Poisson Process (NHPP) to predict the software release date. According to the G-O model, if a is the total number of faults in the software and b is the testing efficiency (the reliability growth rate), then the mean value (cumulative faults) and the failure intensity at a given time t can be calculated as [3]:

Mean value:        μ(t) = a·(1 − exp(−b·t))                  (1)

Failure intensity: λ(t) = μ′(t) = a·b·exp(−b·t)              (2)

Maximizing the likelihood function built from (1) and (2) over the observed weekly fault counts n_i (i = 1, …, k, with N = n_1 + … + n_k faults observed by week t_k) yields the estimation equations [3]:

a = N / (1 − exp(−b·t_k)),

Σ_i n_i·(t_i·exp(−b·t_i) − t_(i−1)·exp(−b·t_(i−1))) / (exp(−b·t_(i−1)) − exp(−b·t_i))
    = N·t_k·exp(−b·t_k) / (1 − exp(−b·t_k))                  ... (3)

3.2 Fault Detection Rate Calculation & Failure Data Processing
To find the predicted failure data of the second release, we need to calculate the values of a (estimated total number of faults) and b (reliability growth rate, reflecting a constant quality of testing). Since b is assumed constant across releases, its value can be obtained from the provided first-release fault data using equation (3). As equation (3) is non-linear, we solved it numerically with the Newton-Raphson method, obtaining a = 199.48 and b = 0.098076 [3] from the first-release fault data. With this value of the reliability growth rate b, we can calculate the predicted second-release defects per week (weeks 19 to 50), as shown in Table 1.

For example, from μ(18) = a·(1 − exp(−0.098076·18)) with the observed μ(18) = 213, we get a = 256.9515 for the second release.
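The calibration-and-prediction steps of Section 3.2 can be sketched as follows. This is an illustrative script, not part of the original analysis; it fixes b at the first-release estimate, recalibrates a from the observed μ(18) = 213, and then predicts the weekly defects of weeks 19 to 50. Small rounding differences from the paper's figures are expected.

```python
import math

# Reliability growth rate b carried over from release 1 (assumed
# constant quality of testing), and the cumulative faults observed
# in release 2 after 18 weeks.
b = 0.098076
t_obs, mu_obs = 18, 213.0

def mu(a, t):
    # G-O mean value function: mu(t) = a * (1 - exp(-b*t)).
    return a * (1.0 - math.exp(-b * t))

# Calibrate a for release 2 from mu(18) = 213; with b fixed this is
# closed-form, so no Newton-Raphson iteration is needed here.
a = mu_obs / (1.0 - math.exp(-b * t_obs))

# Predicted defects detected in week t = mu(t) - mu(t-1).
weekly = {t: mu(a, t) - mu(a, t - 1) for t in range(19, 51)}

print(round(a, 2))           # total fault content of release 2 (~257)
print(round(weekly[19], 4))  # predicted defects in week 19 (~4.11)
print(round(mu(a, 50), 2))   # cumulative predicted faults by week 50 (~255)
```

The printed values line up with a = 256.9515, the week-19 prediction of 4.1069, and the week-50 cumulative total of 255.0477 reported in Table 1, up to rounding of the inputs.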
Then μ(19) = 256.9515·(1 − exp(−0.098076·19)) = 217.1069, so the number of predicted defects in week 19 is 217.1069 − 213 = 4.1069.

According to the provided data, testing costs 2 units per week, each defect found in test costs 1 unit, and each defect found in operation costs 5 units. Table 1 shows the defects and the corresponding costs of the second release up to week 50.

Table 1. Defects and Cost Estimation of Second Release

Week   Defects in   Cumulative     Test     Operational
       Release 2    Defects μ(t)   Cost     Cost
 1     3              3            5        15
 2     3              6            5        15
 3     38            44           40       190
 --    --            --            --       --
18     3            213            5        15
19     4.1069       217.1069       6.1069   20.5345
 --    --            --            --       --
33     1.0401       246.8611       3.0401    5.2003
34     0.9429       247.8040       2.9429    4.7143
35     0.8548       248.6587       2.8548    4.2738
36     0.7749       249.4336       2.7749    3.8745
37     0.7025       250.1361       2.7025    3.5124
38     0.6368       250.7730       2.6368    3.1842
39     0.5773       251.3503       2.5773    2.8867
40     0.5234       251.8737       2.5234    2.6169
41     0.4745       252.3482       2.4745    2.3724
42     0.4301       252.7783       2.4301    2.1507
43     0.3900       253.1683       2.3900    1.9498
44     0.3535       253.5218       2.3535    1.7676
45     0.3205       253.8423       2.3205    1.6024
 --    --            --            --       --
49     0.2165       254.8514       2.2165    1.0823
50     0.1962       255.0477       2.1962    0.9812

Calculating all steps according to the selected model, we found approximately 255 predicted defects in the second release. This result is very close to the results calculated with the Yamada Model and the Goel-Okumoto Model.

4. RECOMMENDATION ABOUT SECOND RELEASE
According to the data analysis of Table 1 and the cost-analysis figure, the test cost is quite steady after week 38: the cost difference between two consecutive weeks decreases in the later weeks, and no major spikes or fluctuations are observed. The second release could therefore be put into operation in any week from the 38th to the 50th. However, we recommend the 41st week or later for the release, since from the 41st week onwards the operational cost becomes lower than the test cost.

Figure 1. Cost Analysis

Figure 2. Actual Failure and Predicted Failure

5. UNCERTAINTIES
The model used to predict reliability from the fault data is not an architecture-based reliability model, so it gives no insight into whether the software's architectural style is complex or simple. We also had no information about the nature of the software, e.g. its size, requirements, usage profile, the application's termination behavior, the organizational structure, the testing tools and techniques used, the available expert resources, the reproducibility of bugs, whether there is adequate time for re-testing, change requests, the product baseline, the development life cycle, the reliability model used for the first release, or the maximum available budget and schedule. These factors are crucial to account for in a release: they may affect the release time and lead to more defects than expected. Moreover, it cannot be guaranteed that correcting older bugs will not introduce new bugs in other units.

6. CONCLUSION
Software reliability is very important for successful software. We used a software reliability growth model (NHPP) to predict the faults and analyzed the cost and effort in order to estimate a proper release date. Since the given data are fault counts over homogeneous time intervals, we used the NHPP model for the manual calculation and Yamada's S-Shaped Model for a more precise outcome.

7. REFERENCES
[1] Misra, P. 1983. Software Reliability Analysis. IBM Systems Journal, Vol. 22, No. 3, pp. 262-270.
[2] Goel, A. L. 1985. Software Reliability Models: Assumptions, Limitations, and Applicability. IEEE Transactions on Software Engineering, 11(12), pp. 1411-1423.
[3] Xie, M., Hong, G. Y., and Wohlin, C. 1999. Software Reliability Prediction Incorporating Information from a Similar Project. Journal of Systems and Software, Vol. 49, pp. 43-48.
