Quality & Reliability in Software Engineering

Speaker notes

  • Why the picture?
  • Requirements vary; specifically, NFRs are never captured. Take the mobile example around the board; ask the audience what quality & reliability mean to them.
  • Involve testing early.
  • http://www.cs.tau.ac.il/~nachumd/horror.html
  • http://www.itl.nist.gov/div898/handbook/apr/section1/apr111.htm
  • http://www.relia-easy.com/UK_INTRO-Quality%20vs%20reliability.html

Transcript

  • 1. Engineering Quality & Reliability. SivaramaSundar.D, 29th Nov 2012
  • 2. Expectations
        How can quality be achieved? - Done
        How do we maintain quality? - Done
        Quality measurements - Overview provided
        How can reliability be achieved? - Done
  • 3. Software Quality
        In simple terms, “Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.”
        More formally, software quality measures how well software is designed (quality of design) and how well the software conforms to that design (i.e., implementation: quality of conformance / quality assurance).
  • 4. So, we’d have quality issues …
        1. if the customer’s expectations are not met by the end result
        2. if there is a lack of conformance to requirements
        3. if the development criteria towards specified standards are not met
        4. if implicit requirements are not captured and addressed, for example:
           - a change in the physician name in configuration should be reflected in the case selection & acquisition screens
           - a remote service connection has to be secure
           - a long text message or caption should show “…” and a tooltip on mouse hover
           - pressing the Escape key or the X button should close a pop-up and cancel the changes made
        (A test sketch for one such implicit requirement follows below.)
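As a concrete illustration of item 4, here is a minimal sketch (in Python) of turning an implicit UI requirement into an explicit, automated check. The elide_caption helper and the 40-character limit are invented for this example and are not part of the presentation.

```python
# Hypothetical sketch: make the implicit "long caption gets '...' plus a
# tooltip" requirement explicit and testable. Names and limits are assumed.

def elide_caption(text: str, max_len: int = 40):
    """Return (display_text, tooltip). Long captions get '…' plus a tooltip."""
    if len(text) <= max_len:
        return text, None                        # short enough: show as-is
    return text[: max_len - 1] + "\u2026", text  # elide, expose full text

def test_long_caption_is_elided_with_tooltip():
    long_name = "Dr. A. Very Long Physician Name, Cardiology Department"
    display, tooltip = elide_caption(long_name)
    assert display.endswith("\u2026")            # visible text is truncated
    assert tooltip == long_name                  # full text available on hover

def test_short_caption_is_untouched():
    display, tooltip = elide_caption("Dr. Smith")
    assert display == "Dr. Smith" and tooltip is None
```

Run with pytest; once the rule is encoded this way, the implicit requirement can no longer silently regress.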
  • 5. Hence, both quality of design and quality assurance need to be ensured throughout the SDLC, across all the phases. The V-Model product development process (and agile) ensures better quality assurance by preparing for testing early in each stage of the SDLC.
  • 6. The V Model involves the testers early in the project lifecycle, thus providing avenues to correct before critical decisions are made. Phase-wise measures:
        Requirements: SSRS and URS cover quality attributes & implicit requirements (performance, security, safety, regulatory); test strategy & plan; adoption of specific test design techniques based on a risk matrix for each unit; Critical-to-Quality use cases; requirement workshops; reviews or walkthroughs, checklists; traceability matrix for requirement conformance; prototypes
        Design: design guidelines, standards; design workshops; DAR; reviews & walkthroughs, checklists
        Implementation: unit testing, mocks & stubs; continuous integration; reviews & walkthroughs, checklists
        Testing: functional testing; integration testing; smoke testing
        (A minimal mock-based unit-test sketch follows below.)
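To make the “unit testing, mocks & stubs” measure from the Implementation row concrete, here is a minimal pytest-style sketch; AcquisitionService and its disk_gateway collaborator are hypothetical names invented for illustration.

```python
# Sketch: unit-test a small rule in isolation by stubbing its collaborator.
from unittest import mock

class AcquisitionService:
    def __init__(self, disk_gateway):
        self.disk_gateway = disk_gateway

    def can_start_acquisition(self, required_mb: int) -> bool:
        # Unit under test: refuse to start when free space is insufficient.
        return self.disk_gateway.free_space_mb() >= required_mb

def test_acquisition_blocked_when_disk_is_nearly_full():
    gateway = mock.Mock()                     # stub out the real disk check
    gateway.free_space_mb.return_value = 100  # simulate a nearly full disk
    service = AcquisitionService(gateway)
    assert service.can_start_acquisition(required_mb=500) is False
    gateway.free_space_mb.assert_called_once()
```

The mock keeps the test fast and repeatable, which is exactly what makes such tests usable in continuous integration.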
  • 7. Cost of Non-Quality!
  • 8. Reliability
        Software reliability is defined as “the probability of failure-free software operation for a specified period of time in a specified environment.”
        It rests on three primary concepts: fault, error, and failure. (A bug in a program is a fault; an incorrect value caused by this bug is an error; a resulting crash of the operating system is a failure.)
        A fault is the result of a mistake made in the development of the system. Faults are dormant, but they can become active through some revealing mechanism. (Examples: a missing check for a free-disk-space threshold before acquisition starts; a null reference; an uninitialized variable leading to errors.)
        An error is the manifestation of what is wrong in the running system. Errors often lead to new errors (propagation), which eventually may lead to system failure. (Example: the fault leads to full disk-space usage without a warning or validation.)
        An error becomes a failure when it is not corrected or masked, i.e., when it becomes observable by the system’s user: a failure is observable by the end user, while an error is not. (Example: full disk space leads to acquisition failure, data loss, or a system crash.)
        Causal chain from the slide diagram: a person (developer) makes zero or many mistakes; mistakes lead to zero or many faults; faults lead to zero or many errors; errors lead to zero or many failures; failures lead to customer complaints and field calls. Each effect can be attributed to one or many causes.
        (A fault-prevention sketch for the disk-space example follows below.)
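A minimal sketch of preventing the dormant disk-space fault described above, by checking a free-space threshold before an acquisition starts. The 500 MB threshold and the start_acquisition function are assumptions for illustration.

```python
# Fault prevention: fail fast with a clear message instead of letting a
# dormant fault surface later as data loss or a crash (a failure).
import shutil

MIN_FREE_MB = 500  # assumed safety threshold before an acquisition may start

def free_space_mb(path: str = "/") -> float:
    usage = shutil.disk_usage(path)
    return usage.free / (1024 * 1024)

def start_acquisition(path: str = "/") -> None:
    if free_space_mb(path) < MIN_FREE_MB:
        raise RuntimeError(
            f"Acquisition blocked: less than {MIN_FREE_MB} MB free on {path}"
        )
    print("Acquisition started")  # placeholder for the real acquisition logic

if __name__ == "__main__":
    start_acquisition()
```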
  • 9. Reliability … explained
        • Specification mistakes – incorrect algorithms, incorrectly specified requirements (timing, power, environmental)
        • Implementation mistakes – poor design, software coding mistakes
        • Component defects – manufacturing imperfections, random device defects, component wear-out
        • External factors – radiation, lightning, operator mistakes
  • 10. Reliability – Key Areas
        Fault Prevention: focuses on avoidance of faults in SW products. Applicable phases: Requirements, Design.
        Fault Detection: focuses on revealing reliability problems. Applicable phases: Requirements, Design, Implementation, Testing.
        Fault Tolerance: ensures that the system keeps working properly in case of faults. Applicable phases: Design, Implementation.
        Fault Forecasting: focuses on prediction of future system reliability. Applicable phases: Deployment, Support & Service.
        (A small fault-tolerance sketch follows below.)
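As a sketch of what fault tolerance can look like in code, here is a retry-with-fallback helper; the retry counts, the choice of OSError, and fetch_patient_worklist are assumptions invented for this example.

```python
# Fault tolerance sketch: retry a flaky operation, then degrade gracefully
# by returning a fallback instead of failing outright.
import time

def with_retry(operation, retries: int = 3, delay_s: float = 0.5, fallback=None):
    """Run operation(); on repeated failure, return fallback (tolerate the fault)."""
    for attempt in range(1, retries + 1):
        try:
            return operation()
        except OSError as exc:               # tolerate transient I/O faults only
            print(f"attempt {attempt} failed: {exc}")
            time.sleep(delay_s)
    return fallback                          # degraded but working behavior

def fetch_patient_worklist():
    raise OSError("network unreachable")     # simulated transient fault

# Falls back to an empty worklist rather than crashing the application.
worklist = with_retry(fetch_patient_worklist, fallback=[])
print(f"worklist entries: {len(worklist)}")
```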
  • 11. Reliability in practice… Phase-wise measures:
        Requirements: reliability requirements (MTBF, MTBC, MTBE, etc.); safety & risk-management requirements; operational profile (which functionality is critical); Critical-to-Quality use cases; requirement workshops; reviews, walkthroughs, checklists
        Design: design guidelines, standards; architecture/design for reliability (principles, practices, and patterns); emphasis on threading and execution architecture; whiteboard designs; design workshops; FMEA; graceful degradation; measuring and testing for reliability (measuring: MTBF, MTBC, ..., tools; testing: load/stress/capacity testing, reliability growth testing, tools); DAR; reviews, walkthroughs
        Implementation: follow best practices & coding standards; error handling; TICS; static analysis, code coverage, memory profiling; reviews, reports attached to CQ activity; improved logging; POST, BIST
        Testing: performance testing; smoke testing based on CTQs & operational profiles; regression testing
        (Decision loop from the slide: is the reliability requirement fulfilled? If NO, iterate; if YES, release the product.)
        (A logging and error-handling sketch follows below.)
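To illustrate the “error handling” and “improved logging” measures from the Implementation row, here is a minimal sketch that logs a fault with context and degrades gracefully instead of crashing; the logger setup and the save_study function are assumptions for illustration.

```python
# Sketch: log faults with enough context to support later field-call analysis.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("acquisition")

def save_study(study_id: str) -> bool:
    try:
        raise OSError("disk full")           # simulated fault for the demo
    except OSError:
        # Log with stack trace and context, then degrade gracefully instead
        # of crashing: the caller can warn the user and retry later.
        log.exception("saving study %s failed; keeping data in memory", study_id)
        return False

if __name__ == "__main__":
    if not save_study("CASE-042"):
        log.warning("study not persisted yet; user has been notified")
```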
  • 12. Reliability in practice… Reliability parameters, targets & measurement criteria:
        Call Rate: target < 1.5 calls per system per year. Actions: implement I/O enhancements. Recommendation: start a study to analyze how to decrease the call rate to < 1.0.
        Failure Rate (an indication of the number of non-recoverable failures in the field): target: fewer than 10 failures reported per site; have explicit robustness designed into the product. Actions: execute FMEAs.
        MTBF: mean time between failures should be 200 days or 1000 studies.
        MTTR: mean time to repair should not exceed 2 days.
        Usage of Private Interfaces (PII): number of private interfaces used should ideally be 0, with no increase in usage of new private interfaces.
        TICS: target: 0 violations for levels 1-6, no increase of level 7-10 violations. Actions: monitor and act.
        Code Coverage (method, statement): > 80% statement coverage & 100% method coverage.
        Code reviews: 100%.
        (A sketch computing MTBF/MTTR from a failure log follows below.)
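A small sketch showing how MTBF and MTTR could be computed from a failure log; the log entries below are fabricated sample data to make the arithmetic concrete, not real field figures.

```python
# MTBF = total operating time / number of failures.
# MTTR = average repair duration per failure.
from datetime import datetime, timedelta

# Each record: (failure_time, repair_completed_time) -- invented sample data.
failure_log = [
    (datetime(2012, 1, 10), datetime(2012, 1, 11)),
    (datetime(2012, 5, 3),  datetime(2012, 5, 4)),
    (datetime(2012, 11, 20), datetime(2012, 11, 21)),
]
observation_period = timedelta(days=365)

downtime = sum((end - start for start, end in failure_log), timedelta())
uptime = observation_period - downtime
mtbf_days = uptime / len(failure_log) / timedelta(days=1)
mttr_days = downtime / len(failure_log) / timedelta(days=1)

print(f"MTBF: {mtbf_days:.1f} days (target: 200 days)")
print(f"MTTR: {mttr_days:.1f} days (target: <= 2 days)")
```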
  • 13. Reliability – Design FMEA
        Identify critical functionality and classify it for effective design towards handling faults. Starting from the high-level specification: identify faults, classify them (e.g., via FMEA, by severity and frequency), and classify functionality according to importance/criticality into three classes:
        Class I (faults to be prevented by system architecture/design): the system/service does not experience faults of class I (fault-prevention design).
        Class II (faults to be handled by the system design): the system/service keeps operating when it encounters a fault of class II (fault-tolerant design).
        Class III (faults not to be handled by the system design): the system/service crashes (controlled fail) when it encounters a fault of class III.
        (A small FMEA risk-scoring sketch follows below.)
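Classic FMEA ranks each failure mode by severity, occurrence, and detection (each typically on a 1-10 scale) and multiplies them into a Risk Priority Number (RPN); here is a minimal scoring sketch with invented sample failure modes and scores.

```python
# FMEA-style risk scoring: higher RPN = higher design priority.
failure_modes = [
    # (description,                      severity, occurrence, detection)
    ("disk full during acquisition",     9,        4,          3),
    ("null reference in case selection", 6,        3,          2),
    ("insecure remote service link",     8,        2,          5),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

# Rank failure modes so design effort targets the riskiest ones first.
for name, s, o, d in sorted(failure_modes, key=lambda m: -rpn(*m[1:])):
    print(f"RPN {rpn(s, o, d):3d}  {name}")
```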
  • 14. Reliability testing
        1. Derive test cases from operational profiles. Possible inputs/tools: application specialist; logging from production deployment; use cases. Hints: operational profiles may differ per deployment; test cases derived from operational profiles differ from stress testing.
        2. Run tests. Possible inputs/tools: manual testing on the system; mocks, stubs, drivers, simulators; QTP; test automation framework. Hint: test cases should be as repeatable as possible and executed under the same conditions.
        3. Gather data. Possible inputs/tools: system logging; test logging. Hints: in case of automation, keep the time-compression factor in mind; the failure definition should be explicit for the tested system.
        4. Plot the data and extract failure intensity and failure rate.
        5. Predict reliability at the end of the current project phase. Hint: be aware that the predicted reliability will still be very vulnerable to variances.
        (A sketch estimating failure rate and reliability from test data follows below.)
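A sketch of steps 4 and 5 under a constant-failure-rate (exponential) model, where reliability is R(t) = exp(-λt); the observed test hours and failure count are fabricated sample data for illustration, and real programs may use more elaborate reliability-growth models.

```python
# Estimate a constant failure rate from test data, then predict R(t).
import math

test_hours = 400.0      # total accumulated test time (sample data)
failures_observed = 5   # explicit failure definition applied to the logs

failure_rate = failures_observed / test_hours      # lambda, failures per hour
mtbf_hours = 1.0 / failure_rate

def reliability(t_hours: float) -> float:
    """Probability of failure-free operation for t_hours (exponential model)."""
    return math.exp(-failure_rate * t_hours)

print(f"failure rate: {failure_rate:.4f} /h, MTBF: {mtbf_hours:.0f} h")
print(f"R(24 h)  = {reliability(24):.3f}")
print(f"R(168 h) = {reliability(168):.3f}")
```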
  • 15. Quality vs. Reliability
        Quality is a snapshot at the start of life (time zero): all requirements are implemented as per the design, and all user expectations are met. Time-zero defects are mistakes that escaped the final test. “Quality is everything until put into operation (0 hours).”
        Reliability is a motion picture of the day-by-day operation: the additional defects that appear over time are “reliability defects,” or reliability fallout. “Reliability is everything happening after 0 hours.”