Process Improvement for Better Software Technical Quality under a Global Crisis Scenario
Luis Rodriguez Berzosa – lrodriguez@optimyth.com




Madrid, 4th-7th of June 2012
Software Quality under a Global Crisis Scenario?

• Current situation: economic crisis
    – Decreased investment in new SW development projects. A higher % of the IT
      budget goes to maintenance (corrective and perfective).
    – Existing software has to work better and must admit change at lower cost.
    – Fewer resources for software quality.
• Software is more and more ubiquitous
    – Who doesn't use a mobile device and mobile apps, or a social network, or
      electronic transaction systems with government agencies or their bank?
    – How many processes are NOT supported by software?
    – How many pieces of a country's key infrastructure are NOT based on
      software systems?
    – How many organizations could shut down their information systems and
      not disappear?
• Even though software is one of the most widely used products of
  human civilization, it shows shocking failure rates, higher than any other
  human invention, mainly due to lack of quality.



Recipes to tackle the crisis. Manager's view
  Subprime crisis → global financial crisis → real economy crisis → recession → ?

  Size reduction
  • Well-known recipes.
  • Important: accept the situation, analyze the causes, make an
    accurate diagnosis, and propose effective measures in a short-term plan.
  • Objective: stay alive when the crisis is over.




Software Quality trends (2011)
  1. Quality is not a software development add-on, but an integral part of the
     process from the beginning. It is not an expense: high quality levels are
     correlated with shorter-than-average development time and cost.
  2. Software development/maintenance creates business risk that should be
     mitigated by independent quality assurance.
  3. Higher use of open source frameworks.
  4. Use of cloud solutions to replace 'in-house' solutions.
  5. Quality = Testing + … not only testing.
      • The number of defects is a quality indicator, but not the only one.
      • Wider-angle approach: verify compliance with standards and
        regulations, avoid common coding and configuration defects, detect design
        anti-patterns or non-compliance with architectural standards, make
        software easier to maintain, and in general technically measure other
        characteristics (efficiency, portability, usability, reliability…).
       Source: QA InfoTech, 2011 - http://www.prweb.com/releases/2011/01/prweb4959164.htm


The gap widens…
• … between the complexity of current software systems and software quality
  assurance technology "in the broad sense"
    – Test automation has been evolving for 20 years, but hasn't been able to
        advance enough to cope with current software platforms. The cost of
        maintaining automated tests has not diminished; quite the contrary [1].
    – Static code analysis technology has been around for even longer, but the
        number of languages, frameworks, middleware and architectural models
        has grown exponentially. Static analyzers haven't been able to keep
        pace.
    – Knowledge about software quality metrics, security
        vulnerabilities, techniques to facilitate future maintenance and so
        on hasn't changed much in the past 20 years. It is true that
        acknowledgement of the quality problems has grown, but that is
        not enough.
    – Organizations' software complexity has grown, but the use of tools that
        catalog software elements and the dependencies between them has not
        grown at the same rate.
    [1] http://www.origsoft.com/_assets/client/docs/pdf/whitepapers/throw_away_test_automation.pdf

Maintenance: What kind of changes can you afford?




               Source: Boehm B., "Software Engineering Economics", Prentice Hall (1981).

www.dilbert.com




Analyzing Software Quality issues

                        • Possible framework
                        • Situation analysis
                        • Diagnostics, by quality characteristic



Let's try to define Software Quality

• Software quality is much more than the absence of known defects (the
  goal of testing)
• Different classes of quality: functional quality, technical quality, process
  quality, quality in use, service quality (support), subjective quality (user
  satisfaction), quality against standards (compliance), legal quality
  (SLAs, contracts)
• Key aspects that our definition must include:
    – Ease of accommodating changes (in the business, in the environment, in the
      technology, in the load, in the regulation…)
    – It must be predictable, measurable (in economic terms), and accommodate new
      characteristics and environments (virtualization and cloud).
    – It must include the benefit of quality for the business, not only the implicit cost
      of its treatment.



Classic framework: ISO 9126 / 25000 (SQuaRe)




This framework is still valid, but the emphasis has shifted to other characteristics.

Situation analysis: "soul searching"
• What are the software quality issues?
    –   Software does not fulfill user or business expectations (functionality).
    –   Bad requirements: not up to date, badly defined (functionality).
    –   Software has security issues (functionality).
    –   Software is not stable. Under specific circumstances, issues arise or the
        system stops being reliable (reliability).
    –   Software is difficult to use and users are less productive (usability).
    –   Users only use a reduced set of functionality and are prone to errors when
        they try other features (usability).
    –   Software requires frequent hardware upgrades, or response times soar
        under heavy loads (efficiency).
    –   Software changes are costly and there is a high risk of regression errors
        (maintainability).
    –   Testing the changes requires a lot of resources (maintainability).
    –   Software has to be maintained on a legacy platform and is hard to
        integrate with other systems (portability).
    –   Software also has "toxic assets".


Diagnostic: root cause
• We are looking for the root causes of the software quality
  problems:
• Functionality problems: badly defined or out-of-date
  requirements, requirements not reviewed, no user
  acceptance tests, no product security audits, development
  teams lacking software security knowledge…
• Reliability problems: concurrency defects (race conditions
  or synchronization failures), transactional control failures, memory
  leaks, wrong error handling, failover environments not
  supported, no technical tests to detect these problems…
• Usability problems: product definition did not consider the user
  interface, wrong user interface components used, lack of
  documentation, UI not intuitive enough, complex and/or
  limited configuration, no user testing.

Diagnostic (2)
• Efficiency problems: inefficient DB queries, available physical
  resources (memory, CPU) left unused, unnecessary contention, excessive
  granularity in external interfaces, no performance-related requirements,
  no performance tests, development teams lacking optimization
  knowledge…
• Maintainability problems: poorly documented code, unnecessary
  complexity or excessive coupling, no public, well-documented API, an
  initial design no longer valid due to continuous patches and
  modifications, no regression tests, lack of a development standard to
  assure future maintainability…
• Portability problems: excessive coupling with the execution
  environment, lack of an API to ease integration with other systems, no
  portability requirements, a design that doesn't allow replacing
  environment components, lack of platform migration documentation…



Defect injection: where, how, when…

• The diagnosis has to identify the causes that inject quality defects:
  under what circumstances and when in the development lifecycle, and what is
  so particular about the process, the methodology, the teams, the tools
  or the approach that causes some defects to be injected into the software
  and end up in the product.
• Example: if I'm suffering performance problems in SQL statements,
  how did we get to this problem? How can I prevent it?
• The true causes are in the process, in the commitment and knowledge of
  the software quality teams, and in the development and QA practices applied.
• This analysis leads to specific actions to improve the quality
  process.




Technical quality issues

                                • Reliability
                                • Efficiency
                                • Maintainability




Technical quality: Reliability (robustness)
  • Reliable software = software that is not prone to errors or problems (crashes,
    infinite loops, performance degradation, sporadic errors, etc.)
  • Usual suspects: defects (design, implementation) that lead to memory leaks,
    deadlocks, race conditions, data inconsistency, inefficient use of resources,
    or inadequate error management.
  • Process causes: reactive approach, lack of reliability testing and detection
    techniques, regression errors, inexperienced development and QA teams.
                                                 Trends:
                                                 •   Better detection and testing tools.
                                                 •   Frameworks with better
                                                     monitoring, transaction support and fault
                                                     tolerance capabilities.
                                                 •   Easy-to-use APIs.


http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/
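As an illustration of one of the "usual suspects" above, the following sketch (illustrative, not from the talk) shows the classic race-condition pattern on a shared counter and the lock that removes it:

```python
import threading

def unsafe_increment(counter, n):
    # Read-modify-write without synchronization: two threads can read the
    # same value and both write value+1, losing one increment (a race condition).
    for _ in range(n):
        counter["value"] += 1

def safe_increment(counter, lock, n):
    # The lock serializes the read-modify-write, so no updates are lost.
    for _ in range(n):
        with lock:
            counter["value"] += 1

def run(worker, *args):
    threads = [threading.Thread(target=worker, args=args) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    lock = threading.Lock()
    safe = {"value": 0}
    run(safe_increment, safe, lock, 100_000)
    print(safe["value"])  # always 400000: the lock prevents lost updates
```

The unsafe variant may happen to produce the right total on a given run, which is exactly why such defects escape testing and call for the detection techniques the slide mentions.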


Technical quality: efficiency
•   Specification issues: if there are no non-functional requirements specifying the
    expected load, the response times or the expected operation rates, there is
    nothing to test or to comply with… until the user complaints start coming.
•   Design flaws: excessive granularity in remote interfaces, lack of
    optimizations like cache use, unnecessary I/O.
•   Implementation flaws: too many locks or too much concurrency serialization,
    incorrect use of APIs. An unpredicted infinite loop is simultaneously an
    availability and a performance problem.
•   Database problems (lack of indices or inefficient SQL). DB programming needs
    specialists.
•   Assuring efficiency requires knowledge of performance and of
    optimization for the software technology used, performance
    requirements, specific tests (load, stress, performance) and other techniques
    (like profiling and design and code reviews).
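A minimal sketch of the database point above (illustrative schema and data, not from the talk), using Python's built-in sqlite3: the same lookup with and without an index, where EXPLAIN QUERY PLAN shows whether SQLite scans the whole table or uses the index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 1000, i * 1.5) for i in range(10_000)])

def plan(sql):
    # The 'detail' column of EXPLAIN QUERY PLAN describes the access strategy.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

query = "SELECT total FROM orders WHERE customer_id = 42"
print(plan(query))   # reports a full scan of the orders table: O(n) per lookup

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(plan(query))   # now reports a search using idx_orders_customer: O(log n)
```

The exact plan wording varies by SQLite version, but the scan-versus-index distinction is the point: this is the kind of defect a DB specialist or a technical quality analyzer catches before the load tests do.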




Technical quality: Maintainability
• A hidden characteristic for the final user… unless the user has to pay
  the corrective/perfective maintenance cost.
• A significant share of software cost is due to maintenance.
  Corrective maintenance is important, but statistically perfective
  maintenance consumes a bigger portion of the cake.
• Any maintainability improvement will have an important impact on
  software cost and on the response to changes in demand.
• Software maintenance depends on the organization, the infrastructure
  and how the software is built.
• "Software is not usually designed for change. Even when it
  complies with the operational specification, its design and
  documentation are usually poor for maintenance". See
  Lehman's laws [*].
• In times of crisis, is it useful to do preventive maintenance to improve
  this characteristic and reduce cost?

                     [*] http://en.wikipedia.org/wiki/Software_evolution


How are software costs distributed?
• The % of the overall cost spent on maintenance has
  increased from 50% in the '70s to up to 90% today. By maintenance
  type:
     –   Corrective: fix software problems (20%)
     –   Adaptive: adapt the software to a changing environment (25%)
     –   Perfective: update the software with new functionality (50%)
     –   Preventive: improve the software to make it easier to maintain, or detect
         problems not yet identified by users (5%)
• Almost half of the maintenance time is spent on analysis (SWEBOK)
     –   50% understanding/analysis, 10% design, 10% documentation, 5% implementation, 25% testing

• Functional changes: 7% of the size (FP) / year
• Problems concentrate: it typically takes 5x more effort to
  maintain problematic modules.
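The percentages above can be turned into a back-of-the-envelope budget split. The total budget below is an illustrative assumption; the 90% maintenance share and the per-type split come from the slide:

```python
# Hypothetical total lifecycle budget, in k$ (illustrative figure).
total_budget = 1000.0
maintenance_share = 0.90  # up to 90% of lifecycle cost (slide figure)

# Split of maintenance effort by type, from the slide.
by_type = {"corrective": 0.20, "adaptive": 0.25, "perfective": 0.50, "preventive": 0.05}

maintenance = total_budget * maintenance_share
for kind, share in by_type.items():
    print(f"{kind:>10}: {maintenance * share:6.1f} k$")
# Perfective alone consumes 450.0 k$ -- half of all maintenance, which is why
# maintainability improvements have such leverage on total cost.
```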



Maintenance problems: root causes

  • Software systems are built under deadline pressure; functionality is
    the priority and, if there is time, performance, reliability and usability.
  • Design does not take change into account. The features necessary
    for later software modification are overlooked:
       – Requirements rarely take this into account.
       – The logical architecture is not designed for change.
       – Maintainability is not measured, hence it is not improved before delivery.
       – Inadequate approach to the maintenance process.
       – Poor documentation about the evolution of the software (beyond the
         historical change log), its design or its structure.
       – Knowledge loss due to staff rotation.
       – "Maintenance = mechanical task for junior developers".




What can you do?
                                • Action plan
                                • Technical quality "upsell"




Effectiveness/efficiency quality metrics
•   Let's use a classic metric (IBM, 1973): defect removal rate (DRR)
     – % of defects detected, identified and removed before a given phase (or release).
       The average DRR is around 85% in the USA (IBM).
     – The defect detection rate (DDR) is a little higher (92%).
                                        Potential defects per FP, by size (FP)
                       Defects source
                                          10      100    1,000 10,000 100,000
                        Requirements    0.25     0.75      1.00      1.25      1.50
                         Architecture   0.05     0.10      0.25      0.50      0.75
                               Design   0.50     1.00      1.25      1.50      1.75
                         Source code    1.65     1.70      1.75      2.00      2.10
                                Tests   1.25     1.50      1.85      2.00      2.10
                          Documents     0.60     0.65      0.70      0.75      0.80
                            Database    1.50     2.00      2.75      3.00      3.20
                         Web content    1.25     1.50      1.75      2.00      2.25
                               TOTAL    7.05     9.20    11.30     13.00      14.45

                     Source: Applied Software Measurement, C. Jones

•   Target (measurable, even reachable): DRR > 95%
•   The metric needs an estimate of the potential defects. There are models that
    estimate defects from system size (in FP or another volume metric), and the
    defects fixed in each phase can be counted directly. This way you can
    measure DRR/DDR.
                                             Potential defects = FP^E, where E depends on the project and team
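A sketch of the DRR bookkeeping described above. The function names are illustrative, and the exponent value is an assumption (the slide only says E depends on the project and team):

```python
def potential_defects(function_points, e=1.2):
    # Model from the slide: potential defects = FP^E, with E project-dependent.
    # (1.2 is an illustrative value, not from the slide.)
    return function_points ** e

def removal_rate(defects_removed_before_release, function_points, e=1.2):
    # DRR = share of the potential defects removed before release.
    return defects_removed_before_release / potential_defects(function_points, e)

# Example: a 1,000 FP system where 3,800 defects were removed before release.
fp = 1000
potential = potential_defects(fp)  # 1000^1.2, roughly 3981 potential defects
drr = removal_rate(3800, fp)
print(f"DRR = {drr:.1%}")          # above 95% -- meets the target on the slide
```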

Action plan (in times of economic crisis…)
• Actions necessarily have to be short-term and prioritized:
    – Outsource part of the quality management.
    – Centralize quality management (look for economies of scale).
    – Train the development teams to avoid repeating errors, or to be more efficient
      at solving quality problems during maintenance.
    – Apply cheap technical solutions (technical quality analyzers).
    – Refactor the software with the highest priority, where the immediate benefit is greatest.
    – Detect and remove "dead code", to reduce the volume of code to maintain.
    – Introduce efficient techniques to solve the quality problems with the highest impact:
      design/code inspections, requirements reviews, special testing, coding
      standards, etc.
    – Forget about reaching high test automation rates (or try to automate
      some types of tests if their future maintenance cost allows it).
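As a sketch of the "dead code" action (illustrative; a real project would use a dedicated static-analysis or coverage tool): collect the functions a module defines and the names it calls, and report the defined-but-never-called ones as removal candidates:

```python
import ast

SOURCE = """
def used():
    return 1

def unused():
    return 2

print(used())
"""

tree = ast.parse(SOURCE)
defined = {node.name for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)}
called = {node.func.id for node in ast.walk(tree)
          if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}

# Functions defined but never called anywhere in the module are candidates
# for removal. This is a crude heuristic: dynamic calls, methods and names
# exported to other modules are not handled.
dead = defined - called
print(sorted(dead))  # ['unused']
```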




How much do defects cost?
 • The cost of a defect grows as we move along the life
   cycle. This fact encourages a strategy of early detection
   and correction, or even prevention.
 • The underlying benefit of tackling quality problems is not only saving the
   cost of correcting them…
     –   but also shorter development time,
     –   lower development and maintenance cost,
     –   lower probability of project failure,
     –   fewer contractual problems,
     –   higher user satisfaction.
 • Cf. The Economics of Software Quality - Capers Jones &
   Olivier Bonsignour.
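The growth of defect cost along the life cycle can be sketched with relative multipliers. The figures below are illustrative placeholders (commonly cited rough orders of magnitude, not taken from the slides):

```python
# Illustrative relative cost multipliers for fixing a defect, by the phase
# where it is found. All figures are assumptions for the sketch.
relative_fix_cost = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}

def escape_cost(base_fix_cost, found_phase):
    # Cost of a defect that escapes to `found_phase` before being found,
    # relative to fixing it immediately at injection time.
    return base_fix_cost * relative_fix_cost[found_phase]

# A requirements defect costing 1 unit to fix on the spot costs ~100 units
# if it escapes to production -- the argument for early detection/prevention.
print(escape_cost(1, "production"))  # 100
```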

Economies of scale for quality techniques
• The bigger the software, the bigger the impact of the lack of
  quality. Size increases the unit cost per FP, but applying defect
  prevention and removal techniques before testing yields higher cost
  reductions at bigger sizes.
             Size (FP)   Low quality   Medium quality   High quality   Cost reduction
                    10           688              625            594              14%
                   100           886              787            748              16%
                 1,000         1,040              920            847              19%
                10,000         2,393            2,380          1,872              22%
               100,000         5,078            4,340          3,819              25%

           Cost in $ per function point. Cost reduction = low quality -> high quality.
           Source: The Economics of Software Quality, C. Jones, O. Bonsignour
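The cost-reduction column can be recomputed from the low- and high-quality columns (values as in the table; the column is (low − high) / low, rounded):

```python
# (size_fp, low_quality_cost, high_quality_cost) in $ per FP, from the table.
rows = [
    (10, 688, 594),
    (100, 886, 748),
    (1_000, 1_040, 847),
    (10_000, 2_393, 1_872),
    (100_000, 5_078, 3_819),
]

for size, low, high in rows:
    reduction = (low - high) / low
    print(f"{size:>7} FP: {reduction:.0%}")
# 14%, 16%, 19%, 22%, 25% -- larger systems gain more from quality techniques.
```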

• All types of testing together reach up to 85% effectiveness (15% of defects end
  up in the released version). High quality demands prevention and correction
  techniques applied before testing (> 95% effectiveness).


Quality toolbox: prevention techniques

•   Reuse of certified components
•   [*] Training and software quality awareness
•   [*] Formal inspections or reviews. http://www.gilb.com/Inspection
•   Specialized techniques: Poka-Yoke (ポカヨケ, "error-proofing"), QFD
•   [*] Prototyping
•   Patterns (design, requirements, code, architecture, documentation…)
•   [*] Quality analysis (and management) tools
•   [*] Include quality score in status reports
•   Agile development (TDD, pair programming)


           In these times of economic crisis, implement practices that
                     produce value in the short/mid term [*]

"Quality debt": How to "upsell" technical quality
• Quality debt: measures the cost and effort necessary to fix detected
  quality problems (debt acquired now that has to be repaid in the future with
  accrued interest).
• Stating the lack of technical quality in economic terms is not very
  accurate, but it has a positive effect and helps justify preventive work.




               (*) Similar term: technical debt, introduced by Ward Cunningham in 1992.
                         http://www.martinfowler.com/bliki/TechnicalDebt.html
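A minimal sketch of how such a quality-debt figure could be computed. The finding categories, effort estimates and hourly rate are all illustrative assumptions:

```python
# Illustrative findings from a technical quality analyzer:
# (issue type, count, estimated fix effort in hours per issue).
findings = [
    ("duplicated code block", 40, 1.5),
    ("method too complex", 25, 2.0),
    ("missing error handling", 60, 0.5),
]

HOURLY_RATE = 50.0  # illustrative loaded team cost in $/hour

def quality_debt(findings, rate):
    # Debt = total remediation effort priced at the team's hourly rate.
    hours = sum(count * effort for _, count, effort in findings)
    return hours, hours * rate

hours, dollars = quality_debt(findings, HOURLY_RATE)
print(f"{hours:.0f} h of remediation, about ${dollars:,.0f}")  # 140 h, $7,000
```

Imprecise as the slide admits, but a single dollar figure like this is what makes the preventive work visible in a budget discussion.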

Estimating software quality is a difficult task




www.dilbert.com (2)




Final recommendations
• Soul-search: find where you have the highest-impact quality problems, their
  root causes, how they were introduced and what you can do to
  detect/fix/prevent them.
    – After this evaluation, build a prioritized action plan that takes into
      account the organization's principles.
• Raise awareness up the ladder: "upsell" the immediate and tangible benefits
  that quality improvement offers, and make it a priority-one
  objective in times of economic crisis.
    – Historical data, quality debt metrics, benchmarking, learning from others.
    – The metric "cost per defect" is not useful in economic terms.
• Quality innovation: old approaches may not be valid in the current
  context.
    – Other classical practices, like inspection, are still valid; if the soul search
      dictates them, use them.
    – You can even propose a complete change of paradigm:
      http://googletesting.blogspot.com/2011/01/how-google-tests-software.html


Situations that reduce quality
 • Unstructured requirements definition; lack of volume metrics
 • Absence of any kind of risk analysis
 • Lack of explicit quality rules; internal quality not included in project
   status reports
 • High requirements change rate + inadequate requirements change
   management
 • Excessive pressure from end users or project managers
 • Inexperience in avoiding common defects (e.g. reliability,
   performance, security, maintainability…)
 • The attitude "only the quality we can afford" (which usually means none)


     Any improvement actions you implement will easily be outweighed by
                a nasty combination of these "negative forces"

Madrid, 4th-7th of June 2012

Process Improvement for better Software Technical Quality under Global Crisis Scenario

  • 1.
    Process Improvement forbetter Software Technical Quality under Global Crisis scenario Luis Rodriguez Berzosa – lrodriguez@optimyth.com Madrid, 4th-7th of June 2012
  • 2.
    Software Quality undera Global Crisis Scenario? • Current situation: Economic crisis – Decrease in new SW development projects investment. Higher % of IT budget for Maintenance (corrective and perfective). – Existing software has to work better and must admit change with less cost. – Less resources for software quality. • Software is more and more ubiquitous – Who doesn‟t use a mobile device and mobile apps, or a social network, or electronic transaction systems with government agencies or their bank? – How many process are NOT supported by software? – How many key infrastructure of a country are NOT based on software systems? – How many organizations could shutdown their information systems and not disappear? • Even though software is one of the most widely used products of human civilization, it shows shocking failure rates, more than any other human invention mainly due to lack of quality. Madrid, 4th-7th of June 2012
  • 3.
    Recipes to tacklethe crisis. Manager‟s view Subprime crisis global financial crisis  real economy crisis  recession  ? • Well known recipes. Size reduction • Important: accept the situation, analyze the causes, accurate diagnosis, and propose effective measures in short term plan. • Objective: Stay alive when the crisis is over. Madrid, 4th-7th of June 2012
  • 4.
    Software Quality trends(2011) 1. Quality is not a software development add-on, but an integral part of the process from the beginning. It is not an expense: High quality levels are related with shorter than average development time and cost. 2. Software development/maintenance creates business risk that should be deleted by independent quality assurance. 3. Higher use of open source frameworks. 4. Use of cloud solutions to substitute ‘in-house’ solutions. 5. Quality = Testing + … not only testing. • The # of defects is a quality indicator but not the only one • Wider angle approach: verify the compliance with standards and normatives, avoid common coding and configuration , detect design anti- patterns defects or not compliance with architectural normatives, make software easier to maintain, and in general technically measure other characteristics (efficiency, portability, usability, reliability…) QA InfoTech, 2011 - http://www.prweb.com/releases/2011/01/prweb4959164.htm Madrid, 4th-7th of June 2012
  • 5.
    The gap widens… •… between current software systems complexity and software quality assurance technology “in the broad sense” – Test automation has been evolving for 20 years, but hasn‟t been able to advance to cope with current software platforms. The cost to maintain the automated tests has not diminish, on the contrary1. – Static code analysis technology has been around for even longer, but the number of languages, frameworks, middleware and architectural models have grown exponentially. The static analyzers haven‟t been able to keep the pace. – The knowledge on software quality metrics, security vulnerabilities, techniques to facilitate future maintenance and so on, haven‟t changed much in the past 20 years. It is true that the acknowledgement that there are quality problems has grown, but that is not enough. – Organization's software complexity has grown, but not at the same rate as the use of tools to generate a software element catalog and the dependencies between them. (1) http://www.origsoft.com/_assets/client/docs/pdf/whitepapers/throw_away_test_automation.pdf Madrid, 4th-7th of June 2012
  • 6.
    Maintenance: What kindof changes can you afford? Fuente: Boehm B., “Software Engineering Economics”, Prentice Hall (1981). Madrid, 4th-7th of June 2012
  • 7.
  • 8.
    Analyzing Software Qualityissues  Possible framework  Situation analysis  Diagnostics, by quality characteristic Madrid, 4th-7th of June 2012
  • 9.
    Let‟s try todefine Software Quality • Software quality is much more than the absence of known defects (the goal of testing) • Different types classes: functional quality, technical quality, process quality, quality in-use, service quality (support), subjective quality (user satisfaction), quality against standards (compliancy), legal quality (SLA, contracts) • Key aspects that our definition must include: – Ease to accommodate changes (in the business, in the environment, in the technology, in the load, in the regulation…) – Must be predictable, measurable (in economic terms), accommodate new characteristics and environments (virtualization and cloud). – It must include the benefit of quality for the business, not only the implicit cost of its treatment. Madrid, 4th-7th of June 2012
  • 10.
    Classic framework: ISO9126 / 25000 (SQuaRe) Este marco sigue vigente, pero se cambia el énfasis en otras características. Madrid, 4th-7th of June 2012
  • 11.
    Situation analysis: “soulsearching” • What are the software quality issues? – Software does not fulfill user nor business expectations (functionality). – Bad requirements, not up to date, badly defined (functionality)). – Software has security issues (Functionality)). – Software is not stable. Under specific circumstances, issues arise or the system stops being reliable (reliability). – Software is difficult to use and users and less productive (usability). – Users only use a reduce set of functionality and are prompt to errors when they try other characteristics (usability). – Software requires frequent hardware updates or response times soar under heavy loads (efficiency). – Software changes are costly and there is a high risk of regression errors (maintainability). – To test the changes you need a lot of resources (maintainability). – Software has to be maintained for a legacy platform and it is hard to integrate with other systems (portability). – Software also has “toxic asset”. Madrid, 4th-7th of June 2012
  • 12.
    Diagnostic: root cause •We are looking for root causes of the software quality problems: • Functionality problems: Badly designed requirements or not up to date. Not reviewed requirements. No user acceptance tests. No product security audits. Development teams don‟t have software security knowledge… • Reliability problems: Concurrency defects (race conditions or sync failures), transactional control failures, memory leaks, wrong error control, failover environments not supported, no technical tests to detect these problems… • Usability problems: Product definition did not consider user interface, wrong user interface components used, lack of documentation, UI not intuitive enough, complex and/or limited configuration, no user testing. Madrid, 4th-7th of June 2012
  • 13.
    Diagnostic (2) • Efficiencyproblems: In efficient DB queries, unused available physical resources (memory, CPU), unnecessary contention, excessive granularity in external interfaces, no performance related requirements, no performance tests, development teams don‟t have optimization techniques knowledge… • Maintainability problems: Not well documented code, unnecessary complexity or excessive coupling, no public well documented API, the initial design is no longer valid due to continuous patches and modifications, no regression tests, lack of a development normative to assure future maintenance… • Portability problems: Excessive coupling with the execution environment, lack of API to ease integration with other systems, no portability requirements, the design doesn't allow to replace environment components, lack of platform migration documentation… Madrid, 4th-7th of June 2012
  • 14.
    Defect injection: where,how, when… • The diagnosis has to identify the causes that inject quality defects. Under what circumstances, when in the development lifecycle, what is so particular about the process, the methodology, the teams, the tools or the approach that makes some defects to be injected in the software ending up in the product. • Example: If I‟m suffering performance problems in the SQL sentences, how do we get to this problem? How can I prevent it? • The true causes are in the process, the commitment and knowledge of the software quality teams, the development and QA practices applied. • This analysis yields to propose specific actions to improve the quality process. Madrid, 4th-7th of June 2012
  • 15.
    Technical quality issues  Reliability  Efficiency  Maintainability Madrid, 4th-7th of June 2012
  • 16.
    Technical quality: Reliability (robustness)  Reliable software = Software that is not prone to errors or problems (crashes, infinite loops, performance degradation, sporadic errors, etc.)  Usual suspects: defects (design, implementation) that yield to memory leaks, deadlocks, race conditions, data inconsistency, inefficient use of resources, inadequate error management.  Process causes: Reactive approach, lack of reliability testing and detection techniques, regression errors, development and QA teams inexperience Trends: • Better detection and testing tools. • Frameworks with better monitoring, transaction support and fault tolerance capabilities. • Easy to use APIs. http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/ Madrid, 4th-7th of June 2012
  • 17.
    Technical quality: efficiency • Specification issues: If there are non functional requirements specifying the expected load, the response times nor the expected operation rates, there is nothing to test or to comply to… until the user complaints start coming. • Design flaws: excessive granularity in the remote interfaces, lack of optimization like cache use, unnecessary I/O. • Implementations flaws: too many locks or concurrency serialization, incorrect use of the APIs. An unpredicted infinite loop is simultaneously an availability and performance problem. • Database problems (lack of indices or inefficient SQL). DB programming needs specialists. • To assure efficiency you need knowledge about performance and the optimization of the software technology used, performance requirements, specific tests (load, stress, performance) and other techniques (like profiling and design and code reviews). Madrid, 4th-7th of June 2012
Technical quality: maintainability
• A characteristic hidden from the end user… unless the user has to pay the corrective/perfective maintenance cost.
• A significant share of total software cost is due to maintenance. Corrective maintenance is important, but statistically perfective maintenance consumes a bigger portion of the cake.
• Any maintainability improvement will have an important impact on software cost and on the response to changing demands.
• Software maintenance depends on the organization, the infrastructure and how the software is built.
• "Software is not usually designed for change. Even when it complies with the operational specification, its design and documentation are usually poor for maintenance." Review Lehman's laws [*].
• In times of crisis, is it useful to do preventive maintenance to improve this characteristic and reduce cost?
[*] http://en.wikipedia.org/wiki/Software_evolution
How is the software cost distributed?
• The percentage of overall development cost spent on maintenance has increased from 50% in the '70s to up to 90% today. By maintenance type:
  – Corrective: fix software problems (20%)
  – Adaptive: adapt the software to a changing environment (25%)
  – Perfective: update the software with new functionality (50%)
  – Preventive: improve the software to make it easier to maintain, or detect problems not identified by users (5%)
• Almost half of maintenance time is spent on analysis (SWEBOK):
  – 50% understanding/analysis, 10% design, 10% documentation, 5% implementation, 25% testing
• Functional changes: 7% of the size (FP) per year.
• Problems concentrate: it typically takes 5x more effort to maintain problematic modules.
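The maintenance-type percentages above sum to a whole budget, so they translate directly into an allocation. A quick sketch (the 1,000,000 budget figure is an assumption for illustration; the shares are the ones quoted on the slide):

```python
# Maintenance effort shares from the slide: corrective 20%, adaptive 25%,
# perfective 50%, preventive 5%.
shares = {"corrective": 0.20, "adaptive": 0.25, "perfective": 0.50, "preventive": 0.05}
assert abs(sum(shares.values()) - 1.0) < 1e-9  # the shares cover the whole budget

budget = 1_000_000  # assumed annual maintenance budget, illustration only
allocation = {kind: budget * share for kind, share in shares.items()}

# Perfective maintenance takes the biggest portion of the cake,
# which is the slide's point about where improvement effort pays off.
print(allocation)
```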
Maintenance problems: root causes
• Software systems are built under deadline pressure; functionality is the priority and, if there is time, performance, reliability and usability.
• Design does not take change into account. The features needed for later software modification are overlooked:
  – Requirements rarely take this into account.
  – The logical architecture is not designed for change.
  – Maintainability is not measured, hence it is not improved before delivery.
  – Inadequate approach to the maintenance process.
  – Poor documentation of the evolution of the software (beyond the historical change log), of the design, or of the software structure.
  – Knowledge loss due to staff turnover.
  – "Maintenance = a mechanical task for junior developers".
What can you do?
• Action plan
• Technical quality "upsell"
Effectiveness/efficiency quality metrics
• Let's use a classic metric (IBM, 1973): the defect removal rate (DRR), the % of defects detected, identified and removed before a given phase (or release). The average DRR is around 85% in the USA (IBM).
• The defect detection rate (DDR) is a little higher (92%).

Potential defects per FP, by size (FP):

  Defects source | 10   | 100  | 1,000 | 10,000 | 100,000
  Requirements   | 0.25 | 0.75 | 1.00  | 1.25   | 1.50
  Architecture   | 0.05 | 0.10 | 0.25  | 0.50   | 0.75
  Design         | 0.50 | 1.00 | 1.25  | 1.50   | 1.75
  Source code    | 1.65 | 1.70 | 1.75  | 2.00   | 2.10
  Tests          | 1.25 | 1.50 | 1.85  | 2.00   | 2.10
  Documents      | 0.60 | 0.65 | 0.70  | 0.75   | 0.80
  Database       | 1.50 | 2.00 | 2.75  | 3.00   | 3.20
  Web content    | 1.25 | 1.50 | 1.75  | 2.00   | 2.25
  TOTAL          | 7.05 | 9.20 | 11.30 | 13.00  | 14.45

  Source: Applied Software Measurement, C. Jones

• Target (measurable, even reachable): DRR > 95%.
• The metric needs an estimate of the potential defects. There are models that estimate defects from system size (in FP or another volume metric), and the defects fixed in each phase can be counted directly. This way you can measure DRR/DDR. Potential defects = FP^E, where E depends on the project and team.
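The FP^E model above can be sketched in a few lines. The exponent value 1.25 used as a default here is an assumed, commonly quoted figure, not one fixed by the slide; E depends on the project and team.

```python
# Sketch of the DRR computation described above.
def potential_defects(function_points: float, e: float = 1.25) -> float:
    """Estimated potential defects for a system of a given size; E is assumed."""
    return function_points ** e

def defect_removal_rate(removed: float, function_points: float, e: float = 1.25) -> float:
    """Fraction of the estimated potential defects removed before release."""
    return removed / potential_defects(function_points, e)

fp = 1_000
estimated = potential_defects(fp)     # ~5623 potential defects for 1,000 FP
drr = defect_removal_rate(5_400, fp)  # suppose 5,400 defects removed before release
print(f"potential = {estimated:.0f}, DRR = {drr:.1%}")  # against the > 95% target
```

The point of the sketch: the removed defects are counted directly, only the denominator is a model, so the metric is cheap to track release over release.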
Action plan (in times of economic crisis…)
• Actions necessarily have to be short term and prioritized:
  – Outsource part of the quality management.
  – Centralize quality management (look for economies of scale).
  – Train the development teams to avoid repeating errors, or to be more efficient at solving quality problems during maintenance.
  – Apply cheap technical solutions (technical quality analyzers).
  – Refactor the software with the highest priority, where the immediate benefit is greater.
  – Detect and remove "dead code" to reduce the volume of code to maintain.
  – Introduce efficient techniques to solve the quality problems with the highest impact: design/code inspections, requirements reviews, specialized testing, coding standards, etc.
  – Forget about reaching a high test automation rate (or try to automate some types of tests if their future maintenance cost allows it).
How much do defects cost?
• The cost of a defect grows as we move along the life cycle. This fact encourages a strategy of early detection and correction, or even prevention.
• The underlying cost of quality problems is not only the cost of correcting them; quality also brings:
  – Shorter development time,
  – Lower development and maintenance cost,
  – Lower probability of project failure,
  – Fewer contractual problems,
  – Higher user satisfaction.
• Cf. The Economics of Software Quality, Capers Jones & Olivier Bonsignour.
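The "cost grows along the life cycle" claim is usually illustrated with per-phase multipliers. The round numbers below are assumptions for illustration only, not figures from the talk; they only encode the well-known pattern that a defect found in production costs orders of magnitude more than one found at requirements time.

```python
# Illustrative, assumed cost multipliers per life-cycle phase.
relative_fix_cost = {
    "requirements": 1,
    "design": 5,
    "implementation": 10,
    "testing": 20,
    "production": 100,
}

base_cost = 50  # assumed cost (e.g. in EUR) of fixing a defect found at requirements time
for phase, factor in relative_fix_cost.items():
    print(f"{phase:>15}: {base_cost * factor}")
```

Whatever the exact multipliers are for a given organization, the shape of the curve is what justifies early detection and prevention.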
Economies of scale for quality techniques
• The bigger the software, the bigger the impact of a lack of quality. Size increases the unit cost (per FP), but applying defect prevention and removal techniques before testing yields higher cost reductions at bigger sizes.

  Size (FP) | Low quality | Medium quality | High quality | Cost reduction (low -> high)
  10        | 688         | 625            | 594          | 14%
  100       | 886         | 787            | 748          | 16%
  1,000     | 1,040       | 920            | 847          | 19%
  10,000    | 2,393       | 2,380          | 1,872        | 22%
  100,000   | 5,078       | 4,340          | 3,819        | 25%

  Cost in $ per function point. Source: The Economics of Software Quality, C. Jones, O. Bonsignour

• All types of testing reach at most 85% effectiveness (15% of defects end up in the released version). High quality demands prevention and correction techniques before testing (> 95% effectiveness).
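The cost-reduction column in the table above can be recomputed directly from the low- and high-quality cost figures, which makes the scale effect easy to verify:

```python
# Cost per FP (low quality, high quality), from the Jones & Bonsignour table above.
cost_per_fp = {   # size in FP -> (low quality, high quality)
    10:      (688, 594),
    100:     (886, 748),
    1_000:   (1_040, 847),
    10_000:  (2_393, 1_872),
    100_000: (5_078, 3_819),
}

for size, (low, high) in cost_per_fp.items():
    reduction = (low - high) / low
    print(f"{size:>7} FP: {reduction:.0%} cheaper with high quality")
```

The reductions grow from 14% at 10 FP to 25% at 100,000 FP: the bigger the system, the bigger the payoff of quality techniques applied before testing.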
Quality toolbox: prevention techniques
• Reuse of certified components
• [*] Training and software quality awareness
• [*] Formal inspections or reviews: http://www.gilb.com/Inspection
• Specialized techniques: Poka-Yoke (ポカヨケ, "error-proofing"), QFD
• [*] Prototyping
• Patterns (design, requirements, code, architecture, documentation…)
• [*] Quality analysis (and management) tools
• [*] Include a quality score in status reports
• Agile development (TDD, pair programming)
[*] In these times of economic crisis, implement the practices that produce value in the short/mid term.
"Quality debt": how to "upsell" technical quality
• Quality debt measures the cost and effort necessary to fix the detected quality problems (acquired debt that has to be repaid in the future with accrued interest).
• Stating the lack of technical quality in economic terms is not very accurate, but it has a positive effect and helps justify the preventive work.
(*) Similar term: technical debt, introduced by Ward Cunningham in 1992. http://www.martinfowler.com/bliki/TechnicalDebt.html
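The debt-with-interest metaphor can be sketched as a tiny model. Everything numeric here is an assumption for illustration: the per-defect fix costs and the 15% "interest" rate (the extra cost each release of deferring a fix) are invented, not from the talk.

```python
# A sketch of "quality debt": remediation cost today, plus assumed interest
# compounding for every release the fix is deferred.
def quality_debt(defect_fix_costs: list[float],
                 interest_rate: float = 0.15,
                 releases_deferred: int = 0) -> float:
    principal = sum(defect_fix_costs)                    # cost of fixing everything now
    return principal * (1 + interest_rate) ** releases_deferred

costs = [400.0, 1_200.0, 250.0]  # estimated fix cost per open defect (assumed)
print(quality_debt(costs))                        # pay now: 1850.0
print(quality_debt(costs, releases_deferred=3))   # the same debt, three releases later
```

As the slide concedes, the euros are not accurate; the value of the model is rhetorical, giving managers a number that visibly grows when preventive work is postponed.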
Estimating software quality is a difficult task
Final recommendations
• Soul-search: where do you have the quality problems with the highest impact, what are their root causes, how were they introduced, and what can you do to detect/fix/prevent them?
  – After this evaluation, build a prioritized action plan that takes the organization's principles into account.
• Awareness up the ladder: "upsell" the immediate and tangible benefits that quality improvement offers, and that it should be a top-priority objective in times of economic crisis.
  – Historical data, quality debt metrics, benchmarking, learning from others.
  – The metric "cost per defect" is not useful in economic terms.
• Quality innovation: old approaches may not be valid in the current context.
  – Other classical practices, like inspection, are still valid; if the soul-searching dictates them, use them.
  – You can even propose a complete paradigm change: http://googletesting.blogspot.com/2011/01/how-google-tests-software.html
Situations that reduce quality
• Unstructured requirement definitions; lack of volume metrics
• Absence of any kind of risk analysis
• Lack of explicit quality rules; internal quality not included in the project status reports
• High requirement change rate + inadequate requirement change management
• Excessive pressure from end users or project managers
• Inexperience in how to avoid common defects (e.g. reliability, performance, security, maintainability…)
• The attitude "only the quality we can afford" (usually none)
Any improvement actions you may implement will easily be outweighed by a nasty combination of these "negative forces".