
The Top Ten Things That Have Been Proven to Affect Software Reliability

There are many myths about what causes reliable or unreliable software. This presentation shows the proven facts.

  1. Softrel, LLC, 20 Towne Drive #130, Bluffton, SC 29910. http://www.softrel.com, 321-514-4659, amneufelder@softrel.com. January 22, 2015. Copyright © SoftRel, LLC 2011. This presentation may not be copied in part or in whole without written permission from Ann Marie Neufelder.
  2. About Ann Marie Neufelder
     • Chairperson of the IEEE 1633 Recommended Practices for Software Reliability
     • Since 1983 has been a software engineer or manager for DoD and commercial software applications
     • Co-authored the first DoD guidebook on software reliability
     • Developed NASA's webinar on software FMEA and FTA
     • Has trained every NASA center and every major defense contractor on software reliability
     • Has a patent pending for a model to predict software defect density
     • Has conducted software reliability predictions for numerous military and commercial vehicles and aircraft, space systems, medical devices and equipment, semiconductors, and commercial electronics
  3. There are three basic things that determine the software MTTF and MTTCF. This presentation focuses on reducing the density of the defects that escape development and testing. (A toy illustration of the first two sensitivities follows this slide.)
     • Fielded defect density/defects. Sensitivity: cutting this in half doubles MTBF. Reducing defects requires eliminating the development practices that aren't effective as well as embracing those that are.
     • Effective code size. Sensitivity: cutting effective size in half doubles MTBF. COTS and reuse can have a big impact, and error in the size prediction has a direct impact on the error in the reliability prediction.
     • Reliability growth: how many hours real end users operate the system (e.g., a tank) per month after deployment. The relationship is non-linear: more operation after delivery means the MTTF at the end of the growth period is better, but the MTTF upon delivery is lower because more defects are found earlier.
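A minimal sketch of the first two sensitivities, assuming a toy model in which MTBF is inversely proportional to the number of fielded defects. The function name and scaling constant are invented for illustration; this is not SoftRel's calibrated model:

```python
def predicted_mtbf(defect_density, effective_ksloc, k=1000.0):
    """Toy model (assumption): MTBF is inversely proportional to the
    number of fielded defects, i.e. density * effective size.
    k is an arbitrary scaling constant, not a calibrated value."""
    fielded_defects = defect_density * effective_ksloc
    return k / fielded_defects

baseline = predicted_mtbf(defect_density=1.0, effective_ksloc=100.0)
# Halving the fielded defect density doubles the predicted MTBF...
assert predicted_mtbf(0.5, 100.0) == 2 * baseline
# ...and halving the effective code size has the same effect.
assert predicted_mtbf(1.0, 50.0) == 2 * baseline
```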
  4. If you ask a software engineer to rank the top 10 factors associated with unreliable software, this is what they might say (with where each factor actually ranks among the 523 benchmarked factors):
     1. Not enough calendar time to finish. Actual rank: #457. Late projects are usually late because they started late, not because of insufficient time.
     2. Too much noise. Actual rank: #352.
     3. Insufficient design tools. Actual rank: #126.
     4. Agile development. So far, not a single project in our database used this completely and consistently from start to finish.
     5. Existing code is too difficult to change. Actual rank: #146.
     6. Number of years of experience with a particular language. Actual rank: #400. What matters is the industry experience.
     7. Our software is inherently more difficult to develop. Actual rank: #370. Everyone thinks this.
     8. Everybody has poor coding style. Actual rank: #423. While code with good style may be less error prone, that doesn't mean it's defect free.
     9. Object oriented design and code. Actual rank: #395. While OO code may be more cohesive, that doesn't mean it's defect free.
     10. If they would just leave me alone I could write great code. Our data shows that the reverse is true.
  5. If you ask a software process engineer to rank the top 10 factors associated with unreliable software, this is what they might say (again with actual ranks):
     1. Capability maturity. Actual rank: #417. Organizations with low CMM can and have developed reliable software; defect density reduction in our database plateaued at level 3.
     2. Process improvement activities. Actual rank: #8. The right activities, tailored to the process, can avoid a failed project but do not necessarily result in a successful one.
     3. Metrics. Actual rank: #54. Not all metrics are relevant for reducing defects, and too many metrics or poorly timed metrics won't reduce defects either.
     4. Code reviews. Actual rank: #366. The criteria for the reviews are often missing or not relevant to reducing defects.
     5. Independent SQA audits. Actual rank: #359. Probably because the audits focus on product and often miss technique.
     6. Popular metrics such as complexity. Actual rank: #430. The fastest way to reduce complexity is to reduce exception handling, which is necessary for reliability.
     7. Peer reviews. Actual rank: #368. Peer reviews often lack a clear agenda, and peers don't necessarily understand the requirements.
     8. Traceability of requirements. Actual rank: #61. The problem is what's NOT in the requirements: requirements almost never discuss negative or unexpected behavior.
     9. Independent test organization. Actual rank: #295. Organizations with this are less motivated to do developer testing.
     10. High ratio of testers to software engineers. Actual rank: #380. Those that have this are often not doing developer testing.
  6. This is the top 10 list based on hard facts and data:
     1. Avoid big blobs: "code a little, test a little." Avoid big and long releases, big teams working on the same code, and reinvention of the wheel. Plan ahead with daily or weekly detail and micromanage the development progress.
     2. Mandatory developer white box testing at the module, class and integration level.
     3. Techniques that make it easier to visualize the requirements, design, code, tests or defects.
     4. Identifying, designing, coding and testing what the software should NOT do (a test sketch follows this list).
     5. Understanding the end user: employing software engineers with DOMAIN experience, involving customers in requirements, prototyping, etc.
     6. Not skipping requirements, design, unit test, test, change control, etc., even for small releases.
     7. Defect reduction techniques: formal product reviews, SFMEAs, root cause analysis.
     8. Process improvement activities tailored to the needs of the project.
     9. Maintaining version and source control, defect tracking, and prioritization of changes; avoiding undocumented changes to requirements, design or code; verifying that changes to the code don't have an unexpected impact.
     10. Techniques for testing the software better instead of longer.
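As an illustration of items 2 and 4, here is what a developer-level negative test might look like. This is a minimal sketch; safe_divide and the test names are hypothetical, not from the presentation:

```python
import unittest

def safe_divide(a, b):
    """Defensive implementation: the 'should NOT' case (b == 0) is
    explicitly designed, coded and therefore testable."""
    if b == 0:
        raise ValueError("b must be nonzero")
    return a / b

class TestSafeDivide(unittest.TestCase):
    def test_nominal_behavior(self):
        self.assertEqual(safe_divide(6, 3), 2)

    def test_rejects_zero_divisor(self):
        # Negative test: verifies what the software should NOT do
        # (silently divide by zero), not just what it should do.
        with self.assertRaises(ValueError):
            safe_divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```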
  7. How the "Top Ten" list was developed:
     • Since 1993 Ann Marie Neufelder has benchmarked 679 development factors against actual defect data.
     • 156 factors were either employed by everyone or employed by no one in the database, so the benchmarking was conducted on the remaining 523 factors.
     • The data comprises 75 complete sets and 74 incomplete sets of actual fielded defect data (see the backup slides for a summary of the projects in this database).
     • The benchmarking results yielded a ranked list of each factor by sensitivity to fielded defect density, and a model to predict defect density before the code is even written. (A simplified sketch of the ranking idea follows this slide.)
     • Refer to the white paper "The Cold Hard Truth about Reliable Software, Revision 6e."
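The presentation doesn't give the ranking math, but a simplified stand-in is to score each factor by the strength of its association with fielded defect density across projects. All of the data and factor names below are invented for illustration:

```python
import numpy as np

# Hypothetical data: rows are projects, columns are factors
# (1 = factor employed, 0 = not), plus each project's observed
# fielded defect density. Every value here is invented.
factors = np.array([[1, 0, 1],
                    [0, 1, 1],
                    [1, 1, 0],
                    [0, 0, 1]])
defect_density = np.array([0.4, 1.8, 0.6, 2.3])
names = ["developer white box testing",
         "throw-over-the-wall testing",
         "domain experience on team"]

# Score each factor by the magnitude of its correlation with
# fielded defect density, then rank: a strongly negative r means
# the factor is associated with fewer fielded defects.
scores = [np.corrcoef(factors[:, j], defect_density)[0, 1]
          for j in range(factors.shape[1])]
for name, r in sorted(zip(names, scores), key=lambda t: abs(t[1]), reverse=True):
    print(f"{name}: r = {r:+.2f}")
```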
  8. These are the actual fielded defect densities for each of the projects in the database. [Chart: actual fielded defect density, roughly 0 to 4, plotted for each of about 60 projects; the projects with the highest densities were distressed, while those with the lowest were successful.] If you can predict where your software release sits with respect to those in our database, you can predict the reliability.
  9. The 523 factors and the four P's and a T:
     • Product (50 factors): size, complexity, OO design, whether the requirements are consistent, code that is old and fragile, etc.
     • Product risks (12 factors): risks imposed by end users, government regulations, customers, product maturity, etc.
     • People (38 factors): turnover, geographical location, amount of noise in the work area, number of years of experience in the applicable industry, number of software people, ratio of software developers to testers, etc.
     • Process (121 factors): procedures, compliance, exit criteria, standards, etc.
     • Technique (302 factors): the specific methods, approaches and tools that are used to develop the software. Example: using an SFMEA to help identify the exceptions that should be designed and coded.
     Now let's see which development activities have been covered.
  10. The 523 factors by development phase/activity (number of factors in parentheses):
     • Scheduling and personnel assignments (32)
     • Project execution: making sure that work gets done on time and with the desired functionality (24)
     • Software development planning (10)
     • Requirements analysis (42)
     • Architectural design (3)
     • Design (32)
     • Detailed design (22)
     • Firmware design (1)
     • Graphical user interface design (2)
     • Database design (1)
     • Network design (1)
     • Implementation/coding (54)
     • Corrective action: correcting defects (11)
     • Configuration management (CM), source and version control (27)
     • Unit testing: testing from a developer's perspective (48)
     • Systems testing: testing from a black box perspective (75)
     • Regression testing: retesting after changes have been made to the baseline (4)
     • Defect tracking (17)
     • Process improvement (24)
     • Reliability engineering (18)
     • Software quality assurance (25)
     • No activity: factors related to operational profile and inherent risks (33)
  11. The factors associated with increased defect density:
     1. Using short term contractors to write code that requires domain expertise and is sensitive to your company.
     2. Reinventing the wheel: failing to buy off the shelf when you can.
     3. Large projects spanning many years with many people.
     4. A "throw it over the wall" testing approach.
     Now that we've seen what causes high defect density, let's see what causes a failed project.
  12. All failed projects had these things in common:
     • They started the project late.
     • They had more than three things that required a learning curve: a new system/target hardware, new tools or environment, new processes, a new product (version 1), or new software people.
     • They failed to mitigate known risks early in the project.
     What's not on these lists is as important as what IS on these lists.
  13. 13. The factors that didn’t correlate one way or the other to reduced defect density  Metrics such as complexity, depth of nesting, etc.  Interruptions to software engineers (some interruptions are good while others are not)  Having more than 40% of staff doing testing full time (usually indicates poor developer testing)  CMMi levels > 3  Coding standards that don’t have criteria that are actually related to defects  Metrics that aren’t useful for either progress reporting, scheduling or defect removal  Peer walkthrus (when the peers don’t have domain or industry experience)  Superficial test cases  Number of years of experience with a particular language Copyright © SoftRel, LLC 2011. This presentation may not be copied in part or in whole without written permission from Ann Marie Neufelder.
  14. Conclusions
     • The benchmarking results were used to identify the factors that result in fewer or more defects.
     • The ranked list was used to develop a model that predicts defect density before the code is written.
     • This model is available in the Software Reliability Toolkit, the Software Reliability Toolkit training class, the Frestimate software, and the Software Reliability Assessment services.
     • Traditional software reliability models are used late in testing, when there is little opportunity to improve the software without delaying the schedule.
  15. Backup slides
  16. Software reliability defined:
     • The probability of success of the software over some specified mission time (a standard formulation of this definition follows this slide).
     • A term commonly used to describe an entire collection of software metrics.
     • Also defined as a function of:
       ○ Inherent defects, introduced during requirements translation, design, code, corrective action, integration, and interface definition with other software and hardware.
       ○ Operational profile: duty cycle, spectrum of end users, number of install sites/end users, and product maturity.
     These things can be predicted before the code is written.
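For background, the first definition above is usually written with the standard exponential reliability model (this formula is general reliability engineering, not taken from the slides). Under a constant failure rate λ, the probability of success over a mission of length t is

```latex
R(t) = e^{-\lambda t}, \qquad \lambda = \frac{1}{\mathrm{MTBF}}
```

so halving the fielded defect count, which doubles MTBF per slide 3, halves λ and raises R(t) for every mission time.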
  17. Related terms (a short code illustration follows this slide):
     • Error: a human mistake made while developing the software. Example: a human forgets that b may approach 0 in the algorithm c = a/b.
     • Fault or defect: a flaw in the design or code. Example: the code "c = a/b;" is implemented without exception handling. The defect rate is from the developer's perspective; defects are measured or predicted during testing or operation, and defect density = defects/normalized size.
     • Failure: an event. Example: during execution the conditions are such that the value of b approaches 0 and the software crashes or hangs. The failure rate is from the system or end user's perspective.
     • KSLOC: 1000 source lines of code, a common measure of software size.
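A minimal sketch of the error-fault-failure chain and the defect density arithmetic, using the slide's c = a/b example (the function name and the defect counts are invented for illustration):

```python
# Fault/defect: the human error (forgetting that b may approach 0)
# ends up in the code as a division with no exception handling.
def compute(a, b):
    return a / b  # defect: no handling for b == 0

# Failure: the fault is triggered at runtime by a particular input,
# which is what the system or end user actually experiences.
try:
    compute(1, 0)
except ZeroDivisionError:
    print("failure observed: division by zero")

# Defect density normalizes the defect count by code size.
defects_found = 12
ksloc = 48.0            # 48,000 source lines of code
print(f"defect density = {defects_found / ksloc:.2f} defects/KSLOC")
```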
  18. 18. Who’s doing software reliability predictions?  Space systems  Missiles defense systems  Naval craft  Commercial ground vehicles  Military ground vehicles  Inertial Navigation and GPS  Command and Control and Communication  Electronic Warfare  General aviation  Medical devices  Healthcare/EMR software  Major appliances  Commercial electronics  Semiconductor fabrication equipment  HVAC  Energy 18 Copyright © SoftRel, LLC 2011. This presentation may not be copied in part or in whole without written permission from Ann Marie Neufelder.
  19. About the projects in this database. [Pie chart; pairing the percentages with the legend in order: Defense 29%, Space 5%, Medical 7%, Commercial electronics 10%, Commercial transportation 1%, Commercial software 5%, Energy 5%, Semiconductor fabrication 38%.]
  20. The benchmarking revealed 7 percentile groups in which the project defect densities are clustered. From fewest to most fielded defects: world class (3%), very good (10%), good (25%), average (50%), fair (75%), poor (90%), failed (97%). A release with more strengths than risks sits toward the world class end, one with more risks than strengths toward the failed end, and one whose strengths and risks offset each other near the average.
     • Percentile group predictions pertain to a specific product release and are based on the number of risks and strengths.
     • They can only change if or when risks or strengths change; some risks/strengths are temporary, while others can't be changed at all.
     • A product can transition in the wrong direction on the same product if new risks/obstacles are added or strengths are abandoned.
     • World class does not mean defect free; it simply means better than the defect density ranges in the database. (A toy mapping from strengths and risks to a group follows.)
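A simplified sketch of how a release's balance of strengths and risks might map to a percentile group. The cutoffs below are invented for illustration; SoftRel's actual model is calibrated against the benchmark database:

```python
def percentile_group(strengths: int, risks: int) -> str:
    """Toy mapping from the strength/risk balance to one of the
    seven percentile groups. The thresholds are illustrative only."""
    net = strengths - risks
    if net >= 6:
        return "world class (3%)"
    if net >= 3:
        return "very good (10%)"
    if net >= 1:
        return "good (25%)"
    if net == 0:
        return "average (50%)"
    if net >= -2:
        return "fair (75%)"
    if net >= -4:
        return "poor (90%)"
    return "failed (97%)"

print(percentile_group(strengths=5, risks=2))  # -> very good (10%)
```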
