The CRASH Report - 2011/12 • Summary of Key Findings




Contents

Introduction
     Overview
     The Sample
     Terminology

PART I: Adding to Last Year's Insights
     Finding 1—COBOL Applications Show Higher Security Scores
     Finding 2—Performance Scores Lower in Java-EE
     Finding 3—Modularity Tempers the Effect of Size on Quality
     Finding 4—Maintainability Lowest in Government Applications
     Finding 5—No Structural Quality Difference Due to Sourcing or Shoring

PART II: New Insights This Year
     Finding 6—Development Methods Affect Structural Quality
     Finding 7—Scores Decline with More Frequent Releases
     Finding 8—Security Scores Lowest in IT Consulting
     Finding 9—Maintainability Declines with Number of Users
     Finding 10—Average $3.61 of Technical Debt per LOC

PART III: Technical Debt
     Finding 11—Majority of Technical Debt Impacts Cost and Adaptability
     Finding 12—Technical Debt is Highest in Java-EE
     Future Technical Debt Analyses

Concluding Comments



Introduction

365 million lines of code • 745 applications • 160 organizations

Overview

This is the second annual report produced by CAST on global trends in the structural quality of business applications software. These reports highlight trends in five structural quality characteristics—Robustness, Security, Performance, Transferability, and Changeability—across technology domains and industry segments. Structural quality refers to the engineering soundness of the architecture and coding of an application rather than to the correctness with which it implements the customer's functional requirements. Evaluating an application for structural quality defects is critical since they are difficult to detect through standard testing, and they are the defects most likely to cause operational problems such as outages, performance degradation, breaches by unauthorized users, or data corruption.

This summary report provides an objective, empirical foundation for discussing the structural quality of software applications throughout industry and government. It highlights some key findings from a complete report that will provide deeper analysis of the structural quality characteristics and their trends across industry segments and technologies. The full report will also present the most frequent violations of good architectural and coding practice in each technology domain. You can request details on the full report at: http://research.castsoftware.com.

The Sample

The data in this report are drawn from the Appmarq benchmarking repository maintained by CAST, comprising 745 applications submitted by 160 organizations for the analysis and measurement of their structural quality characteristics, representing 365 MLOC (million lines of code) or 11.3 million Backfired Function Points. These organizations are located primarily in the United States, Europe, and India. This data set is almost triple the size of last year's sample of 288 applications from 75 organizations comprising 108 MLOC.

The sample is widely distributed across size categories and appears representative of the types of applications in business use. Figure 1 displays the distribution of these applications over eight size categories measured in lines of code. The applications range from 10 KLOC (kilo or thousand lines of code) to just over 11 MLOC. This distribution includes 24% less than 50 KLOC, 33% between 50 KLOC and 200 KLOC, 31% between 201 KLOC and 1 MLOC, and 12% over 1 MLOC.

As is evident in Table 1, almost half of the sample (46%) consists of Java-EE applications, while .NET, ABAP, COBOL, and Oracle Forms each constituted between 7% and 11% of the sample. Applications with a significant mix of two or more technologies constituted 16% of the sample.








Figure 1. Distribution of Applications by Size Categories
[Bar chart of application counts by size category in KLOC: 10-20: 60; 20-50: 119; 50-100: 121; 100-200: 122; 200-500: 149; 500-1K: 82; 1K-5K: 86; >5K: 7]


As shown in Table 1, there are 10 industry segments represented in the 160 organizations that submitted applications to the Appmarq repository. Some trends that can be observed in these data include the heaviest concentration of ABAP applications in manufacturing and IT consulting, while COBOL applications were concentrated most heavily in financial services and insurance. Java-EE applications accounted for one-third to one-half of the applications in each industry segment.


Table 1. Applications Grouped by Technology and Industry Segments

Industry             .NET  ABAP   C   C++  COBOL  Java-EE  Mixed  Oracle  Oracle   Other  Visual  Total
                                                            Tech   Forms  CRM/ERP          Basic
Energy & Utilities     3     5    0    0      0      26      3      0       1        2      0       40
Financial Services     5     0    0    2     39      46     50      3       0        4      1      150
Insurance             10     0    1    1     21      27      5      1       2        0      2       70
IT Consulting         11    11    2    2     13      51      6      0       6        1      6      109
Manufacturing          8    19    3    2      4      46      7      0       2        1      2       94
Other                  3     2    1    2      1      11      9      1       0        0      0       30
Government             0     9    1    0      0      25      7     34       0        0      2       78
Retail                 5     5    2    0      2      11      5      0       1        1      0       32
Technology             4     1    0    0      0      14      1      0       0        1      0       21
Telecom                2     7    4    0      0      82     24      0       0        1      1      121
Total                 51    59   14    9     80     339    117     39      12       11     14      745





This sample differs in important characteristics from last year's sample, including a higher proportion of large applications and a higher proportion of Java-EE. Consequently, it will not be possible to establish year-on-year trends by comparing this year's findings to those reported last year. As the number and diversity of applications in the Appmarq repository grow and their relative proportions stabilize, we anticipate reporting year-on-year trends in future reports.

Terminology

LOC: Lines of code. The size of an application is frequently reported in KLOC (kilo or thousand lines of code) or MLOC (million lines of code).

Structural Quality: The non-functional quality of a software application that indicates how well the code is written from an engineering perspective. It is sometimes referred to as technical quality or internal quality, and represents the extent to which the application is free from violations of good architectural or coding practices.

Structural Quality Characteristics: This report concentrates on the five structural quality characteristics defined below. The scores are computed on a scale of 1 (high risk) to 4 (low risk) by analyzing the application for violations against a set of good coding and architectural practices, and using an algorithm that weights the severity of each violation and its relevance to each individual quality characteristic.

The quality characteristics are attributes that affect:

Robustness: The stability of an application and the likelihood of introducing defects when modifying it.

Performance: The efficiency of the software layer of the application.

Security: An application's ability to prevent unauthorized intrusions.

Transferability: The ease with which a new team can understand the application and quickly become productive working on it.

Changeability: An application's ability to be easily and quickly modified.

We also measure:

Total Quality Index: A composite score computed from the five quality characteristics listed above.

Technical Debt: The effort required to fix violations of good architectural and coding practices that remain in the code when an application is released. Technical Debt is calculated only on violations that the organization intends to remediate. Like financial debt, technical debt incurs interest in the form of extra costs that accrue for a violation until it is remediated, such as the effort required to modify the code or the inefficient use of hardware or network resources.

Violations: A structure in the source code that is inconsistent with good architectural or coding practices and has proven to cause problems that affect either the cost or risk profile of an application.
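The report describes the scoring and Technical Debt calculations only at this level and does not publish the underlying formulas. The sketch below is a minimal illustration of the two ideas, assuming a severity- and relevance-weighted violation density mapped onto the 1 (high risk) to 4 (low risk) scale and a per-violation remediation effort for Technical Debt; the class, weights, and rates are hypothetical and are not CAST's actual algorithm.

    import java.util.List;

    // Illustrative sketch only; not the algorithm used to produce the scores in this report.
    public class StructuralQualitySketch {

        // A detected violation with a severity weight, a relevance weight for one
        // quality characteristic, and an estimated effort to fix it.
        record Violation(String rule, double severity, double relevance,
                         boolean plannedForRemediation, double fixHours) {}

        // Map a severity- and relevance-weighted violation density onto the
        // 1 (high risk) to 4 (low risk) scale used throughout the report.
        static double characteristicScore(List<Violation> violations, double kloc) {
            double weightedDensity = violations.stream()
                    .mapToDouble(v -> v.severity() * v.relevance())
                    .sum() / kloc;
            double score = 4.0 - weightedDensity;        // more weighted violations, lower score
            return Math.max(1.0, Math.min(4.0, score));  // clamp to the reported 1..4 range
        }

        // Technical Debt counts only the violations the organization intends to remediate.
        static double technicalDebtDollars(List<Violation> violations, double hourlyRate) {
            return violations.stream()
                    .filter(Violation::plannedForRemediation)
                    .mapToDouble(v -> v.fixHours() * hourlyRate)
                    .sum();
        }

        public static void main(String[] args) {
            List<Violation> found = List.of(
                    new Violation("SQL injection risk", 0.9, 1.0, true, 4.0),
                    new Violation("Expensive call inside a loop", 0.6, 0.8, true, 2.0),
                    new Violation("Unreferenced dead code", 0.2, 0.3, false, 0.5));
            double kloc = 12.0;  // hypothetical application size
            System.out.printf("Characteristic score: %.2f%n", characteristicScore(found, kloc));
            System.out.printf("Technical Debt: $%.2f%n", technicalDebtDollars(found, 75.0));
        }
    }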







PART I: Adding to Last Year's Insights

Finding 1—COBOL Applications Show Higher Security Scores

The distribution of Security scores suggests some industry segments pay more attention to security.

The distribution of Security scores across the current Appmarq sample is presented in Figure 2. The bi-modal distribution of Security scores indicates that applications can be grouped into two distinct types: one group with very high scores and a second group with moderate scores and a long tail toward poor scores. The distribution of Security scores is wider than for any of the other quality characteristics, suggesting strong variations in the attention paid to security among different types of applications or industry segments.

Further analysis of the data presented in Figure 3 revealed that applications with higher Security scores continue to be predominantly large COBOL applications in the financial services and insurance sectors, where high security for confidential financial information is mandated. These scores should not be surprising, since COBOL applications run in mainframe environments where they are not as exposed to the security challenges of the internet. In addition, these are typically the oldest applications in our sample and have likely undergone more extensive remediation of security vulnerabilities over time.

The lower Security scores for other types of applications are surprising. In particular, .NET applications received some of the lowest Security scores. These data suggest that attention to security may be focused primarily on applications governed by regulatory compliance or protection of financial data, while less attention is paid to security in other types of applications.



Figure 2. Distribution of Security Scores
[Histogram of application counts by Security score, from 1.0 (high risk) to 4.0 (low risk)]




Figure 3. Security Scores by Technology
[Box plots (min, 25th percentile, median, 75th percentile, max) of Security scores from 1.0 (high risk) to 4.0 (low risk) for .NET, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic]




Figure 4. Distribution of Performance Scores
[Histogram of application counts by Performance score, from 1.0 (high risk) to 4.0 (low risk)]








Finding 2—Performance Scores Lower in Java-EE

As displayed in Figure 4, Performance scores were widely distributed and generally skewed toward better performance. These data were produced through software analysis and do not constitute a dynamic analysis of an application's behavior or actual performance in use. The scores reflect detection of violations of good architectural or coding practices that may have performance implications in operation, such as the existence of expensive calls in loops that operate on large data tables.

Further analysis of the data presented in Figure 5 revealed that Java-EE applications received significantly lower Performance scores than other languages. Modern development languages such as Java-EE are generally more flexible and allow developers to create dynamic constructs that can be riskier in operation. This flexibility is an advantage that has encouraged their adoption, but it can also be a drawback that results in less predictable system behavior. In addition, developers who have mastered Java-EE may still misunderstand how it interacts with other technologies or frameworks in the application, such as Hibernate or Struts. Generally, low scores on a quality characteristic reflect not merely the coding within a technology, but also the subtleties of how language constructs interact with other technology frameworks in the application and thereby violate good architectural and coding practices.
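The report cites the "expensive call in a loop" pattern without showing code. The sketch below is a hypothetical Java illustration of the violation and one common remediation (hoisting and batching the call); CustomerDao, Order, and the method names are invented for this illustration and do not come from any application in the sample.

    import java.util.List;
    import java.util.Map;

    // Hypothetical illustration of an expensive call inside a loop over a large data set.
    public class LoopCallExample {

        interface CustomerDao {
            String findName(long customerId);              // one round trip per call
            Map<Long, String> findNames(List<Long> ids);   // single batched round trip
        }

        record Order(long id, long customerId) {}

        // Violation: one data-access call per order, so cost grows with the table size.
        static void printLabelsSlow(List<Order> orders, CustomerDao dao) {
            for (Order o : orders) {
                String name = dao.findName(o.customerId());   // expensive call in a loop
                System.out.println(o.id() + " -> " + name);
            }
        }

        // Remediation: hoist the expensive call out of the loop and batch it.
        static void printLabelsBatched(List<Order> orders, CustomerDao dao) {
            List<Long> ids = orders.stream().map(Order::customerId).distinct().toList();
            Map<Long, String> names = dao.findNames(ids);     // one call for all rows
            for (Order o : orders) {
                System.out.println(o.id() + " -> " + names.get(o.customerId()));
            }
        }
    }

Static analysis can flag the first form because a call that crosses a data-access boundary sits inside a loop whose iteration count depends on table size, which is the kind of violation the Performance score is described as penalizing.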



Figure 5. Performance Scores by Technology
[Box plots of Performance scores from 1.5 (high risk) to 4.0 (low risk) for .NET, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic]








Finding 3—Modularity Tempers the Effect of Size on Quality

A negative correlation between size and quality is evident for COBOL applications.

Appmarq data contradict the common belief that the quality of an application necessarily degrades as it grows larger. Across the full Appmarq sample, the Total Quality Index (a composite of the five quality characteristic scores) failed to correlate significantly with the size of applications. However, after breaking the sample into technology segments, we found that the Total Quality Index did correlate negatively with the size of COBOL applications, as is evident in Figure 6, where the data are plotted on a logarithmic scale to improve the visibility of the correlation. The negative correlation indicates that variation in the size of COBOL applications accounts for 11% of the variation in the Total Quality Index (R² = .11).

One explanation for the negative correlation between size and quality in COBOL applications is that COBOL was designed long before the strong focus on modularity in software design. Consequently, COBOL applications are constructed with many large and complex components. More recent languages encourage modularity and other techniques that control the amount of complexity added as applications grow larger. For instance, Figure 7 reveals that the percentage of highly complex components (components with high Cyclomatic Complexity and strong coupling to other components) in COBOL applications is much higher than in other languages, while this percentage is lower for the newer object-oriented technologies such as Java-EE and .NET, consistent with object-oriented principles. However, high levels of modularity may offer a partial explanation for the lower Performance scores in Java-EE applications discussed in Finding 2, as modularity could adversely impact an application's performance.
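Cyclomatic Complexity itself is not defined in this summary. As a reminder, for a single-entry, single-exit routine it is commonly counted as the number of decision points plus one; the hypothetical Java method below is an illustration, not code drawn from the sample.

    // Counting Cyclomatic Complexity on a small example method.
    public class ComplexityExample {

        // Decision points: the for condition, the && operator, and two if statements.
        // 4 decision points + 1 = a Cyclomatic Complexity of 5.
        static int firstNegativeIndex(int[] values, boolean skipZeros) {
            for (int i = 0; i < values.length; i++) {
                if (skipZeros && values[i] == 0) {
                    continue;
                }
                if (values[i] < 0) {
                    return i;
                }
            }
            return -1;
        }
    }

The components Figure 7 flags as highly complex combine high counts of this kind with strong coupling to other components, which is why splitting logic into smaller, more modular units lowers the measure for each component.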



Figure 6. Correlation of Total Quality Index with Size of COBOL Applications
[Scatter plot of Total Quality Index score (roughly 2.5 to 3.5) against COBOL application size in KLOC, plotted on a logarithmic scale from 10 to 10,000]





The increased complexity of components in COBOL is consistent with their much greater size compared with components in other languages. Figure 8 displays the average component size, in lines of code, for applications developed in each of the technologies. While the average component size for most development technologies in the Appmarq repository is between 20 and 50 LOC, the average COBOL component is usually well over 600 LOC.


Figure 7. Percentage of Components that are Highly Complex in Applications by Technology
[Bar chart of the percentage of highly complex objects in applications (0% to 100%) for .NET, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic]

Figure 8. Average Object Size Comparison Across Different Technologies
[Bar chart of average object size in lines of code (0 to 2,000) for .NET, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic]




Measurements and observations of COBOL applications in the Appmarq repository suggest that their components are structurally different from components developed in other technologies, both in size and in complexity. Consequently, we do not believe that COBOL applications should be directly benchmarked against other technologies, because such comparisons may be misleading and mask important findings related to comparisons among other, more similar technologies. Although we will continue reporting COBOL with other technologies in this report, we will identify any analyses where COBOL applications skew the results.

Finding 4—Maintainability Lowest in Government Applications

Transferability and Changeability are critical components of an application's cost of ownership, and scores for these quality characteristics in the Appmarq sample are presented in Figures 9 and 10. The spread of these distributions suggests different costs of ownership for different segments of this sample.

When Transferability and Changeability scores were compared by industry segment, the results presented in Figure 11 for Transferability revealed that scores for government applications were lower than those for other segments. The results for Changeability were similar, although the differences between government and other industry segments were not as pronounced. This sample includes government applications from both the United States and the European Union.



Figure 9. Distribution of Transferability Scores
[Histogram of application counts by Transferability score, from 2.0 to 4.0]





Figure 10. Distribution of Changeability Scores
[Histogram of application counts by Changeability score, from 1.5 to 4.0]



Figure 11. Transferability Scores by Industry Segment
[Transferability scores from 2.0 (high risk) to 4.0 (low risk) by industry segment: Energy & Utilities, Financial Services, Insurance, IT Consulting, Manufacturing, Government, Retail, Technology, and Telecom; government applications score lowest]








Government applications have the most chronic complexity profiles.

Although we do not have cost data, these results suggest that government agencies are spending significantly more of their IT budgets on maintaining existing applications than on creating new functionality. Not surprisingly, the Gartner 2011 IT Staffing & Spending report stated that the government sector spends about 73% of its budget on maintenance, higher than any other segment.

The lower Transferability and Changeability scores for government agencies may partially result from unique application acquisition conditions. In the Appmarq sample, 75% of government applications were acquired through contracted work, compared to 50% of private-sector applications obtained through outsourcing. Multiple contractors working on the same application over time, disincentives in contracts, contractors not having to maintain the code at their own cost, and immature acquisition practices are all potential explanations for the lower Transferability and Changeability scores of government applications. Regardless of the cause, Figure 12 indicates that when COBOL applications are removed from the sample, government applications have the highest proportion of complex components in the Appmarq sample.



Figure 12. Complexity of Components (Not Including COBOL)
[Percentage of highly complex objects in applications (0% to 35%) by industry segment: Energy & Utilities, Financial Services, Insurance, IT Consulting, Manufacturing, Government, Retail, Technology, and Telecom; government applications show the highest percentage]








Compared to Transferability scores, the Changeability scores exhibited an even wider distribution, indicating that they may be affected by factors other than industry segment. Figure 13 presents Changeability scores by technology type, and shows that ABAP, COBOL, and Java-EE had higher Changeability scores than other technologies. It is not surprising that ABAP achieved the highest Changeability scores, since most ABAP code customizes commercial off-the-shelf SAP systems.

The lowest Changeability scores were seen in applications written in C, a language that allows great flexibility in development but apparently sacrifices ease of modification.
                    not surprising that ABAP achieved the high-



Figure 13. Changeability Scores by Technology
[Box plots of Changeability scores from 1.5 (high risk) to 4.0 (low risk) for .NET, ABAP, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic]







PART II: New Insights This Year

Finding 5—No Structural Quality Difference Due to Sourcing or Shoring

Variations in quality are not explained by sourcing model alone.

The Appmarq sample was analyzed based on whether applications were managed by inhouse or outsourced resources. A slightly larger proportion of the applications were developed by outsourced resources (n=390) compared to inhouse resources (n=355). Figure 14 presents data comparing inhouse and outsourced applications, showing no difference between their Total Quality Index scores. This finding of no significant differences was also observed for each of the individual quality characteristic scores. One possible explanation is that many of the outsourced applications were initially developed inhouse before being outsourced for maintenance. Consequently, it is not unexpected that their structural quality characteristics are similar to those of applications whose maintenance remained inhouse.

Similar findings were observed for applications developed onshore versus offshore. Most of the applications in the Appmarq sample were developed onshore (n=585), even if outsourced. As is evident in Figure 15, no significant differences were detected in the Total Quality Index between onshore and offshore applications. There were also no differences observed in any of the individual quality characteristic scores.

Figure 14. Total Quality Index Scores for Inhouse vs. Outsourced
[Box plots of Total Quality Index scores from 2.0 (high risk) to 4.0 (low risk) for inhouse and outsourced applications]

Figure 15. Total Quality Index Scores for Onshore vs. Offshore
[Box plots of Total Quality Index scores from 2.0 (high risk) to 4.0 (low risk) for onshore and offshore applications]








Finding 6—Development Methods Affect Structural Quality

Quality is lowest with custom development methods, and waterfall scores highest in Changeability and Transferability.

The five quality characteristics were analyzed for differences across the development methods used on the applications. For the 204 applications that reported their development method, the most frequently reported methods fell into four categories: agile/iterative methods (n=63), waterfall (n=54), an agile/waterfall mix (n=40), and custom methods developed for each project (n=47). As is evident in Figure 16a, scores for the Total Quality Index were lowest for applications developed using custom methods rather than relying on a more established method. Similar trends were observed for all of the quality characteristics except Transferability.

The Transferability and Changeability scores for applications that reported using waterfall methods are higher than those using agile/iterative methods, as displayed in Figures 16b and 16c. This trend was stronger for Changeability than for Transferability. In both cases, the trend for a mix of agile and waterfall methods was closer to the trend for agile than to the trend for waterfall.

It appears from these data that agile methods are nearly as effective as waterfall at managing the structural quality characteristics affecting business risk (Robustness, Performance, and Security), but less effective at managing those affecting cost (Transferability and Changeability). The agile methods community refers to managing structural quality as managing Technical Debt, a topic we will discuss in Part III.



Figure 16a. Total Quality Index Scores by Development Methods; Figure 16b. Transferability Scores by Development Methods; Figure 16c. Changeability Scores by Development Methods
[Box plots of scores from 2.0 to 3.5 for applications developed with agile/iterative, waterfall, agile/waterfall mix, and custom methods]






Finding 7—Scores Decline with More Frequent Releases

The five quality characteristics were analyzed based on the number of releases per year for each of the applications. The 319 applications that reported the number of releases per year were grouped into three categories: one to three releases (n=140), four to six releases (n=114), and more than six releases (n=59). As shown in Figures 17a, 17b, and 17c, scores for Robustness, Security, and Changeability declined as the number of releases grew, with the trend most pronounced for Security. Similar trends were not observed for Performance and Transferability. In this sample, most of the applications with six or more releases per year were reported to have been developed using custom methods, and the sharp decline for projects with more than six releases per year may be due in part to less effective development methods.

Figure 17a. Robustness Scores by Number of Releases per Year; Figure 17b. Security Scores by Number of Releases per Year; Figure 17c. Changeability Scores by Number of Releases per Year
[Box plots of scores from 1.0 (high risk) to 4.0 (low risk) for applications with 1 to 3, 3 to 6, and more than 6 major releases per year]




Finding 8—Security Scores Lowest in IT Consulting

As is evident in Figure 18, Security scores are lower in IT consulting than in other industry segments. These results did not appear to be caused by technology, since IT consulting displayed one of the widest distributions of technologies in the sample. Deeper analysis of the IT consulting data indicated that the lower Security scores were primarily characteristic of applications that had been outsourced to them by customers. In essence, IT consulting companies were receiving applications from their customers for maintenance that already contained significantly more violations of good security practices.



Figure 18. Security Scores by Industry Segment (Security Score from 1.0 High Risk to 4.0 Low Risk for Energy & Utilities, Financial Services, Insurance, IT Consulting, Manufacturing, Government, Technology, Retail, and Telecom)








Finding 9—Maintainability Declines with Number of Users

The five quality characteristics were analyzed to detect differences based on the number of users for each of the 207 applications that reported usage data. Usage levels were grouped into 500 or less (n=38), 501 to 1000 (n=43), 1001 to 5000 (n=26), and greater than 5000 (n=100). Figures 19a and 19b show that scores for Transferability and Changeability rose as the number of users grew. Similar trends were not observed for Robustness, Performance, or Security. A possible explanation for these trends is that applications with a higher number of users are subject to more frequent modifications, putting a premium on Transferability and Changeability for rapid turnaround of requests for defect fixes or enhancements. Also, the most mission-critical applications tend to rely on the most rigorous (waterfall-like) processes.



Figure 19a. Transferability by Number of Application Users (Transferability Score from 1.5 High Risk to 4.0 Low Risk, plotted against Number of End Users: 500 or less, 501 to 1000, 1001 to 5000, greater than 5000)

Figure 19b. Changeability by Number of Application Users (Changeability Score from 1.5 High Risk to 4.0 Low Risk, plotted against Number of End Users: 500 or less, 501 to 1000, 1001 to 5000, greater than 5000)







PART III: Technical Debt

Finding 10—Average $3.61 of Technical Debt per LOC

This report takes a very conservative approach to quantifying Technical Debt.

Technical Debt represents the effort required to fix problems that remain in the code when an application is released. Since it is an emerging concept, there is little reference data regarding the Technical Debt in a typical application. The CAST Appmarq benchmarking repository provides a unique opportunity to calculate Technical Debt across different technologies, based on the number of engineering flaws and violations of good architectural and coding practices in the source code. These results can provide a frame of reference for the application development and maintenance community.

Since IT organizations will not have the time or resources to fix every problem in the source code, we calculate Technical Debt as a declining proportion of violations based on their severity. In our method, at least half of the high severity violations will be prioritized for remediation, while only a small proportion of the low severity violations will be remediated. We developed a parameterized formula for calculating the Technical Debt of an application with very conservative assumptions about parameter values such as the percent of violations to be remediated at each level of severity, the time required to fix a violation, and the burdened hourly rate for a developer. This formula for calculating the Technical Debt of an application is presented in the Technical Debt Calculation box below.

To evaluate the average Technical Debt across the Appmarq sample, we first calculated the Technical Debt per line of code for each of the individual applications. These individual application scores were then averaged across the Appmarq sample to produce an average Technical Debt of $3.61 per line of code. Consequently, a typical application accrues $361,000 of Technical Debt for each 100,000 LOC, and applications of 300,000 or more LOC carry more than $1 million of Technical Debt ($1,083,000). The cost of fixing Technical Debt is a primary contributor to an application's cost of ownership, and a significant driver of the high cost of IT.

This year's Technical Debt figure of $3.61 is larger than the 2010 figure of $2.82. However, this difference cannot be interpreted as growth of Technical Debt by nearly one third over the past year. The difference is at least in part, and quite probably in large part, a result of a change in the mix of applications included in the current sample.
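To make the averaging step concrete, the short Python sketch below shows the approach as we read it: each application's Technical Debt is first divided by its own size in lines of code, and those per-LOC values are then averaged with equal weight per application. The per-application debt and size figures in the sketch are hypothetical and are not drawn from the Appmarq data.

    # Hypothetical per-application figures: (Technical Debt in dollars, lines of code)
    apps = [(450_000, 120_000), (2_100_000, 800_000), (90_000, 40_000)]

    # Step 1: Technical Debt per line of code for each application
    per_loc = [debt / loc for debt, loc in apps]            # [3.75, 2.625, 2.25]

    # Step 2: unweighted average across applications (one vote per application,
    # rather than total debt divided by total LOC)
    average_per_loc = sum(per_loc) / len(per_loc)
    print(f"Average Technical Debt per LOC: ${average_per_loc:.2f}")   # about $2.88

    # At the report's actual average of $3.61 per LOC:
    #   100,000 LOC -> 100_000 * 3.61 = $361,000
    #   300,000 LOC -> 300_000 * 3.61 = $1,083,000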








                                               Technical Debt Calculation

                         Our approach for calculating Technical Debt is defined below:

   1. The density of coding violations per thousand lines of code (KLOC) is derived from source code analysis using the CAST Application Intelligence Platform. The coding violations highlight issues around Security, Performance, Robustness, Transferability, and Changeability of the code.

   2. Coding violations are categorized into low, medium, and high severity violations. In developing the estimate of Technical Debt, it is assumed that only 50% of high severity problems, 25% of moderate severity problems, and 10% of low severity problems will ultimately be corrected in the normal course of operating the application.

   3. To be conservative, we assume that low, moderate, and high severity problems would each take one hour to fix, although industry data suggest these times should be higher and in many cases are much higher, especially when the fix is applied to an application already in operation. We assumed developer cost at an average burdened rate of $75 per hour.

   4. Technical Debt is therefore calculated using the following formula (a worked sketch of this calculation follows below): Technical Debt = (10% of Low Severity Violations + 25% of Medium Severity Violations + 50% of High Severity Violations) * No. of Hours to Fix * Cost/Hr.
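As an illustration only, the formula in step 4 can be expressed as a short script. This is a minimal sketch using the report's stated parameter values (50%/25%/10% remediation, one hour per fix, $75 per hour); the function name and the example violation counts are hypothetical and are not taken from the Appmarq data.

    # Minimal sketch of the Technical Debt formula described in step 4.
    # Parameter defaults mirror the report's conservative assumptions;
    # the example violation counts below are hypothetical.
    def technical_debt(low, medium, high,
                       low_pct=0.10, medium_pct=0.25, high_pct=0.50,
                       hours_to_fix=1.0, cost_per_hour=75.0):
        """Estimated Technical Debt in dollars for one application."""
        violations_to_fix = low_pct * low + medium_pct * medium + high_pct * high
        return violations_to_fix * hours_to_fix * cost_per_hour

    # Hypothetical application with 12,000 low, 3,000 medium, and 800 high
    # severity violations:
    debt = technical_debt(low=12_000, medium=3_000, high=800)
    print(f"Estimated Technical Debt: ${debt:,.0f}")   # $176,250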








Finding 11—Majority of Technical Debt Impacts Cost and Adaptability

Only one third of Technical Debt carries immediate business risks.

Figure 20 displays the amount of Technical Debt attributed to violations that affect each of the quality characteristics. Seventy percent of the Technical Debt was attributed to violations that affect IT cost: Transferability and Changeability. The other thirty percent involved violations that affect risks to the business: Robustness, Performance, and Security.

Figure 20. Technical Debt by Quality Characteristics for the Complete Appmarq Sample (Transferability 40%, Changeability 30%, Robustness 18%, Security 7%, Performance 5%)

Similar to the findings in the complete sample, in each of the technology platforms the cost factors of Transferability and Changeability accounted for the largest proportion of Technical Debt. This trend is shown in Figure 21, which displays the spread of Technical Debt across quality characteristics for each language. However, it is notable that the proportion of Technical Debt attributed to the three characteristics associated with risk (Robustness, Performance, and Security) is much lower in C, C++, COBOL, and Oracle ERP. Technical Debt related to Robustness is proportionately higher in ABAP, Oracle Forms, and Visual Basic.



Figure 21. Technical Debt by Quality Characteristics for Each Language (stacked percentage breakdown of Technical Debt across Robustness, Performance, Security, Transferability, and Changeability for .NET, ABAP, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic)




Finding 12—Technical Debt is Highest in Java-EE

Technical Debt was analyzed within each of the development technologies. As shown in Figure 22, Java-EE had the highest Technical Debt scores, averaging $5.42 per LOC. Java-EE also had the widest distribution of Technical Debt scores, although scores for .NET and Oracle Forms were also widely distributed. COBOL and ABAP had some of the lowest Technical Debt scores.

Future Technical Debt Analyses

The parameters used in calculating Technical Debt can vary across applications, companies, and locations based on factors such as labor rates and development environments. During the past two years we have chosen parameter values based on the previously described conservative assumptions. In the future, we anticipate changing these values based on more accurate industry data on the average time to fix violations and strategies for determining which violations to fix. The Technical Debt results presented in this report are suggestive of industry trends based on the assumptions in our parameter values and calculations. Although different assumptions about the values to set for parameters in our equations would produce different cost results, the relative comparisons within these data would not change, nor would the fundamental message that Technical Debt is large and must be systematically addressed to reduce application costs and risks and to improve adaptability.
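For instance, re-running the hypothetical technical_debt() sketch from the Technical Debt Calculation section with less conservative assumptions (say, two hours per fix at a burdened rate of $100 per hour) changes every absolute dollar figure but leaves the comparison between applications intact:

    # Same hypothetical application as before, under two sets of assumptions.
    # Absolute costs change; relative comparisons between applications do not.
    conservative = technical_debt(low=12_000, medium=3_000, high=800)
    less_conservative = technical_debt(low=12_000, medium=3_000, high=800,
                                       hours_to_fix=2.0, cost_per_hour=100.0)
    print(conservative, less_conservative)   # 176250.0 470000.0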



Figure 22. Technical Debt within Each Technology (Technical Debt in $/KLOC, from $0 to $15,000, for .NET, ABAP, C, C++, COBOL, Java EE, Oracle Forms, and Visual Basic)







                    Concluding Comments
The findings in this report establish differences in the structural quality of applications based on differences in development technology, industry segment, number of users, development method, and frequency of release. However, contrary to expectations, differences in structural quality were not related to the size of the application, whether its development was onshore or offshore, or whether its team was internal or outsourced. These results help us better understand the factors that affect structural quality and bust myths that lead to incorrect conclusions about the causes of structural problems.

These data also allow us to put actual numbers to the growing discussion of Technical Debt, a discussion that has suffered from a dearth of empirical evidence. While we make no claim that the Technical Debt figures in this report are definitive because of the assumptions underlying our calculations, we are satisfied that these results provide a strong foundation for continuing discussion and the development of more comprehensive quantitative models.

We strongly caution against interpreting year-on-year trends in these data due to changes in the mix of applications making up the sample. As the Appmarq repository grows and the proportional mix of applications stabilizes, we will in time be able to establish annual trends, and may ultimately be able to do this within industry segments and technology groups. Appmarq is a benchmark repository with growing capabilities that will allow the depth and quality of our analysis and measurement of structural quality to improve each year.

The observations from these data suggest that development organizations are focused most heavily on Performance and Security in certain critical applications. Less attention appears to be focused on removing the Transferability and Changeability problems that increase the cost of ownership and reduce responsiveness to business needs. These results suggest that application developers are still mostly in reaction mode to the business rather than being proactive in addressing the long-term causes of IT costs and geriatric applications.

Finally, the data and findings in this report are representative of the insights that can be gleaned by organizations that establish their own Application Intelligence Centers to collect and analyze structural quality data. Such data provide a natural focus for the application of statistical quality management and lean techniques. The benchmarks and insights gained from such analyses provide excellent input for executive governance over the cost and risk of IT applications.




CAST Research Labs

      CAST Research Labs (CRL) was established to further the empirical study of software
      implementation in business technology. Starting in 2007, CRL has been collecting
      metrics and structural characteristics from custom applications deployed by large,
      IT-intensive enterprises across North America, Europe and India. This unique dataset,
      currently standing at approximately 745 applications, forms a basis to analyze actual
      software implementation in industry. CRL focuses on the scientific analysis of large
      software applications to discover insights that can improve their structural quality.
      CRL provides practical advice and annual benchmarks to the global application de-
      velopment community, as well as interacting with the academic community and con-
      tributing to the scientific literature.

       As a baseline, each year CRL publishes a detailed report of software trends found
       in our industry repository. The executive summary of the report can be downloaded
       free of charge at the link below. The full report can be purchased by contacting the
       CAST Information Center at +1 (877) 852 2278 or by visiting:
       http://research.castsoftware.com.




Authors
Jay Sappidi, Sr. Director, CAST Research Labs
Dr. Bill Curtis, Senior Vice President and Chief Scientist, CAST Research Labs
Alexandra Szynkarski, Research Associate, CAST Research Labs




                                            For more information, please visit research.castsoftware.com

More Related Content

Similar to 2011/2012 CAST report on Application Software Quality (CRASH)

Alteryx Telco Use Cases
Alteryx Telco Use CasesAlteryx Telco Use Cases
Alteryx Telco Use Cases
Tridant
 
Demystifying Engineering Analytics
Demystifying Engineering AnalyticsDemystifying Engineering Analytics
Demystifying Engineering Analytics
Cognizant
 
Industrial perspective on static analysis
Industrial perspective on static analysisIndustrial perspective on static analysis
Industrial perspective on static analysis
Chirag Thumar
 
Klac icr - Pitch
Klac icr - PitchKlac icr - Pitch
Klac icr - Pitch
Pham Ngoc Tram Nguyen
 
Detailed Infrastructure Analysis PowerPoint Presentation Slides
Detailed Infrastructure Analysis PowerPoint Presentation SlidesDetailed Infrastructure Analysis PowerPoint Presentation Slides
Detailed Infrastructure Analysis PowerPoint Presentation Slides
SlideTeam
 
IRJET- A Design Approach for Basic Telecom Operation
IRJET- A Design Approach for Basic Telecom OperationIRJET- A Design Approach for Basic Telecom Operation
IRJET- A Design Approach for Basic Telecom Operation
IRJET Journal
 
Limited Budget but Effective End to End MLOps Practices (Machine Learning Mod...
Limited Budget but Effective End to End MLOps Practices (Machine Learning Mod...Limited Budget but Effective End to End MLOps Practices (Machine Learning Mod...
Limited Budget but Effective End to End MLOps Practices (Machine Learning Mod...
IRJET Journal
 
Whitepaper oracle applications_updated with new opkey logo
Whitepaper oracle applications_updated with new opkey logoWhitepaper oracle applications_updated with new opkey logo
Whitepaper oracle applications_updated with new opkey logo
ImranAhmad455575
 
CISQ and Software Quality Measurement - Software Assurance Forum (March 2010)
CISQ and Software Quality Measurement - Software Assurance Forum (March 2010)CISQ and Software Quality Measurement - Software Assurance Forum (March 2010)
CISQ and Software Quality Measurement - Software Assurance Forum (March 2010)
CISQ - Consortium for IT Software Quality
 
Advanced Production Accounting of a Flotation Plant
Advanced Production Accounting of a Flotation PlantAdvanced Production Accounting of a Flotation Plant
Advanced Production Accounting of a Flotation Plant
Alkis Vazacopoulos
 
Leveraging SAP to Support Reliability Analytics
Leveraging SAP to Support Reliability AnalyticsLeveraging SAP to Support Reliability Analytics
Leveraging SAP to Support Reliability Analytics
Mike Poland, CMRP
 
Better testing for C# software through source code analysis
Better testing for C# software through source code analysisBetter testing for C# software through source code analysis
Better testing for C# software through source code analysis
kalistick
 
Operational Infrastructure Management PowerPoint Presentation Slides
Operational Infrastructure Management PowerPoint Presentation SlidesOperational Infrastructure Management PowerPoint Presentation Slides
Operational Infrastructure Management PowerPoint Presentation Slides
SlideTeam
 
IRJET - Augmented Tangible Style using 8051 MCU
IRJET -  	  Augmented Tangible Style using 8051 MCUIRJET -  	  Augmented Tangible Style using 8051 MCU
IRJET - Augmented Tangible Style using 8051 MCU
IRJET Journal
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
ijceronline
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
ijceronline
 
Make Your Application “Oracle RAC Ready” & Test For It
Make Your Application “Oracle RAC Ready” & Test For ItMake Your Application “Oracle RAC Ready” & Test For It
Make Your Application “Oracle RAC Ready” & Test For It
Markus Michalewicz
 
Scada Analysis1.2
Scada Analysis1.2Scada Analysis1.2
Scada Analysis1.2
Miles Weinstein
 
Software engineering in industrial automation state of-the-art review
Software engineering in industrial automation state of-the-art reviewSoftware engineering in industrial automation state of-the-art review
Software engineering in industrial automation state of-the-art review
Tiago Oliveira
 
ProjectReport_SPCinAM
ProjectReport_SPCinAMProjectReport_SPCinAM
ProjectReport_SPCinAM
SYED QAMAR RAZA
 

Similar to 2011/2012 CAST report on Application Software Quality (CRASH) (20)

Alteryx Telco Use Cases
Alteryx Telco Use CasesAlteryx Telco Use Cases
Alteryx Telco Use Cases
 
Demystifying Engineering Analytics
Demystifying Engineering AnalyticsDemystifying Engineering Analytics
Demystifying Engineering Analytics
 
Industrial perspective on static analysis
Industrial perspective on static analysisIndustrial perspective on static analysis
Industrial perspective on static analysis
 
Klac icr - Pitch
Klac icr - PitchKlac icr - Pitch
Klac icr - Pitch
 
Detailed Infrastructure Analysis PowerPoint Presentation Slides
Detailed Infrastructure Analysis PowerPoint Presentation SlidesDetailed Infrastructure Analysis PowerPoint Presentation Slides
Detailed Infrastructure Analysis PowerPoint Presentation Slides
 
IRJET- A Design Approach for Basic Telecom Operation
IRJET- A Design Approach for Basic Telecom OperationIRJET- A Design Approach for Basic Telecom Operation
IRJET- A Design Approach for Basic Telecom Operation
 
Limited Budget but Effective End to End MLOps Practices (Machine Learning Mod...
Limited Budget but Effective End to End MLOps Practices (Machine Learning Mod...Limited Budget but Effective End to End MLOps Practices (Machine Learning Mod...
Limited Budget but Effective End to End MLOps Practices (Machine Learning Mod...
 
Whitepaper oracle applications_updated with new opkey logo
Whitepaper oracle applications_updated with new opkey logoWhitepaper oracle applications_updated with new opkey logo
Whitepaper oracle applications_updated with new opkey logo
 
CISQ and Software Quality Measurement - Software Assurance Forum (March 2010)
CISQ and Software Quality Measurement - Software Assurance Forum (March 2010)CISQ and Software Quality Measurement - Software Assurance Forum (March 2010)
CISQ and Software Quality Measurement - Software Assurance Forum (March 2010)
 
Advanced Production Accounting of a Flotation Plant
Advanced Production Accounting of a Flotation PlantAdvanced Production Accounting of a Flotation Plant
Advanced Production Accounting of a Flotation Plant
 
Leveraging SAP to Support Reliability Analytics
Leveraging SAP to Support Reliability AnalyticsLeveraging SAP to Support Reliability Analytics
Leveraging SAP to Support Reliability Analytics
 
Better testing for C# software through source code analysis
Better testing for C# software through source code analysisBetter testing for C# software through source code analysis
Better testing for C# software through source code analysis
 
Operational Infrastructure Management PowerPoint Presentation Slides
Operational Infrastructure Management PowerPoint Presentation SlidesOperational Infrastructure Management PowerPoint Presentation Slides
Operational Infrastructure Management PowerPoint Presentation Slides
 
IRJET - Augmented Tangible Style using 8051 MCU
IRJET -  	  Augmented Tangible Style using 8051 MCUIRJET -  	  Augmented Tangible Style using 8051 MCU
IRJET - Augmented Tangible Style using 8051 MCU
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Make Your Application “Oracle RAC Ready” & Test For It
Make Your Application “Oracle RAC Ready” & Test For ItMake Your Application “Oracle RAC Ready” & Test For It
Make Your Application “Oracle RAC Ready” & Test For It
 
Scada Analysis1.2
Scada Analysis1.2Scada Analysis1.2
Scada Analysis1.2
 
Software engineering in industrial automation state of-the-art review
Software engineering in industrial automation state of-the-art reviewSoftware engineering in industrial automation state of-the-art review
Software engineering in industrial automation state of-the-art review
 
ProjectReport_SPCinAM
ProjectReport_SPCinAMProjectReport_SPCinAM
ProjectReport_SPCinAM
 

More from CAST

Six steps-to-enhance-performance-of-critical-systems
Six steps-to-enhance-performance-of-critical-systemsSix steps-to-enhance-performance-of-critical-systems
Six steps-to-enhance-performance-of-critical-systems
CAST
 
Application Performance: 6 Steps to Enhance Performance of Critical Systems
Application Performance: 6 Steps to Enhance Performance of Critical SystemsApplication Performance: 6 Steps to Enhance Performance of Critical Systems
Application Performance: 6 Steps to Enhance Performance of Critical Systems
CAST
 
Application Assessment - Executive Summary Report
Application Assessment - Executive Summary ReportApplication Assessment - Executive Summary Report
Application Assessment - Executive Summary Report
CAST
 
Cloud Migration: Azure acceleration with CAST Highlight
Cloud Migration: Azure acceleration with CAST HighlightCloud Migration: Azure acceleration with CAST Highlight
Cloud Migration: Azure acceleration with CAST Highlight
CAST
 
Cloud Readiness : CAST & Microsoft Azure Partnership Overview
Cloud Readiness : CAST & Microsoft Azure Partnership OverviewCloud Readiness : CAST & Microsoft Azure Partnership Overview
Cloud Readiness : CAST & Microsoft Azure Partnership Overview
CAST
 
Cloud Migration: Cloud Readiness Assessment Case Study
Cloud Migration: Cloud Readiness Assessment Case StudyCloud Migration: Cloud Readiness Assessment Case Study
Cloud Migration: Cloud Readiness Assessment Case Study
CAST
 
Digital Transformation e-book: Taking the 20X20n approach to accelerating Dig...
Digital Transformation e-book: Taking the 20X20n approach to accelerating Dig...Digital Transformation e-book: Taking the 20X20n approach to accelerating Dig...
Digital Transformation e-book: Taking the 20X20n approach to accelerating Dig...
CAST
 
Why computers will never be safe
Why computers will never be safeWhy computers will never be safe
Why computers will never be safe
CAST
 
Green indexes used in CAST to measure the energy consumption in code
Green indexes used in CAST to measure the energy consumption in codeGreen indexes used in CAST to measure the energy consumption in code
Green indexes used in CAST to measure the energy consumption in code
CAST
 
9 Steps to Creating ADM Budgets
9 Steps to Creating ADM Budgets9 Steps to Creating ADM Budgets
9 Steps to Creating ADM Budgets
CAST
 
Improving ADM Vendor Relationship through Outcome Based Contracts
Improving ADM Vendor Relationship through Outcome Based ContractsImproving ADM Vendor Relationship through Outcome Based Contracts
Improving ADM Vendor Relationship through Outcome Based Contracts
CAST
 
Drive Business Excellence with Outcomes-Based Contracting: The OBC Toolkit
Drive Business Excellence with Outcomes-Based Contracting: The OBC ToolkitDrive Business Excellence with Outcomes-Based Contracting: The OBC Toolkit
Drive Business Excellence with Outcomes-Based Contracting: The OBC Toolkit
CAST
 
CAST Highlight: Code-level portfolio analysis. FAST.
CAST Highlight: Code-level portfolio analysis. FAST.CAST Highlight: Code-level portfolio analysis. FAST.
CAST Highlight: Code-level portfolio analysis. FAST.
CAST
 
Shifting Vendor Management Focus to Risk and Business Outcomes
Shifting Vendor Management Focus to Risk and Business OutcomesShifting Vendor Management Focus to Risk and Business Outcomes
Shifting Vendor Management Focus to Risk and Business Outcomes
CAST
 
Applying Software Quality Models to Software Security
Applying Software Quality Models to Software SecurityApplying Software Quality Models to Software Security
Applying Software Quality Models to Software Security
CAST
 
The business case for software analysis & measurement
The business case for software analysis & measurementThe business case for software analysis & measurement
The business case for software analysis & measurement
CAST
 
Cast Highlight Software Maintenance Infographic
Cast Highlight Software Maintenance InfographicCast Highlight Software Maintenance Infographic
Cast Highlight Software Maintenance Infographic
CAST
 
What is system level analysis
What is system level analysisWhat is system level analysis
What is system level analysis
CAST
 
Deloitte Tech Trends 2014 Technical Debt
Deloitte Tech Trends 2014 Technical DebtDeloitte Tech Trends 2014 Technical Debt
Deloitte Tech Trends 2014 Technical Debt
CAST
 
What you should know about software measurement platforms
What you should know about software measurement platformsWhat you should know about software measurement platforms
What you should know about software measurement platforms
CAST
 

More from CAST (20)

Six steps-to-enhance-performance-of-critical-systems
Six steps-to-enhance-performance-of-critical-systemsSix steps-to-enhance-performance-of-critical-systems
Six steps-to-enhance-performance-of-critical-systems
 
Application Performance: 6 Steps to Enhance Performance of Critical Systems
Application Performance: 6 Steps to Enhance Performance of Critical SystemsApplication Performance: 6 Steps to Enhance Performance of Critical Systems
Application Performance: 6 Steps to Enhance Performance of Critical Systems
 
Application Assessment - Executive Summary Report
Application Assessment - Executive Summary ReportApplication Assessment - Executive Summary Report
Application Assessment - Executive Summary Report
 
Cloud Migration: Azure acceleration with CAST Highlight
Cloud Migration: Azure acceleration with CAST HighlightCloud Migration: Azure acceleration with CAST Highlight
Cloud Migration: Azure acceleration with CAST Highlight
 
Cloud Readiness : CAST & Microsoft Azure Partnership Overview
Cloud Readiness : CAST & Microsoft Azure Partnership OverviewCloud Readiness : CAST & Microsoft Azure Partnership Overview
Cloud Readiness : CAST & Microsoft Azure Partnership Overview
 
Cloud Migration: Cloud Readiness Assessment Case Study
Cloud Migration: Cloud Readiness Assessment Case StudyCloud Migration: Cloud Readiness Assessment Case Study
Cloud Migration: Cloud Readiness Assessment Case Study
 
Digital Transformation e-book: Taking the 20X20n approach to accelerating Dig...
Digital Transformation e-book: Taking the 20X20n approach to accelerating Dig...Digital Transformation e-book: Taking the 20X20n approach to accelerating Dig...
Digital Transformation e-book: Taking the 20X20n approach to accelerating Dig...
 
Why computers will never be safe
Why computers will never be safeWhy computers will never be safe
Why computers will never be safe
 
Green indexes used in CAST to measure the energy consumption in code
Green indexes used in CAST to measure the energy consumption in codeGreen indexes used in CAST to measure the energy consumption in code
Green indexes used in CAST to measure the energy consumption in code
 
9 Steps to Creating ADM Budgets
9 Steps to Creating ADM Budgets9 Steps to Creating ADM Budgets
9 Steps to Creating ADM Budgets
 
Improving ADM Vendor Relationship through Outcome Based Contracts
Improving ADM Vendor Relationship through Outcome Based ContractsImproving ADM Vendor Relationship through Outcome Based Contracts
Improving ADM Vendor Relationship through Outcome Based Contracts
 
Drive Business Excellence with Outcomes-Based Contracting: The OBC Toolkit
Drive Business Excellence with Outcomes-Based Contracting: The OBC ToolkitDrive Business Excellence with Outcomes-Based Contracting: The OBC Toolkit
Drive Business Excellence with Outcomes-Based Contracting: The OBC Toolkit
 
CAST Highlight: Code-level portfolio analysis. FAST.
CAST Highlight: Code-level portfolio analysis. FAST.CAST Highlight: Code-level portfolio analysis. FAST.
CAST Highlight: Code-level portfolio analysis. FAST.
 
Shifting Vendor Management Focus to Risk and Business Outcomes
Shifting Vendor Management Focus to Risk and Business OutcomesShifting Vendor Management Focus to Risk and Business Outcomes
Shifting Vendor Management Focus to Risk and Business Outcomes
 
Applying Software Quality Models to Software Security
Applying Software Quality Models to Software SecurityApplying Software Quality Models to Software Security
Applying Software Quality Models to Software Security
 
The business case for software analysis & measurement
The business case for software analysis & measurementThe business case for software analysis & measurement
The business case for software analysis & measurement
 
Cast Highlight Software Maintenance Infographic
Cast Highlight Software Maintenance InfographicCast Highlight Software Maintenance Infographic
Cast Highlight Software Maintenance Infographic
 
What is system level analysis
What is system level analysisWhat is system level analysis
What is system level analysis
 
Deloitte Tech Trends 2014 Technical Debt
Deloitte Tech Trends 2014 Technical DebtDeloitte Tech Trends 2014 Technical Debt
Deloitte Tech Trends 2014 Technical Debt
 
What you should know about software measurement platforms
What you should know about software measurement platformsWhat you should know about software measurement platforms
What you should know about software measurement platforms
 

Recently uploaded

Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
tolgahangng
 
Deep Dive: AI-Powered Marketing to Get More Leads and Customers with HyperGro...
Deep Dive: AI-Powered Marketing to Get More Leads and Customers with HyperGro...Deep Dive: AI-Powered Marketing to Get More Leads and Customers with HyperGro...
Deep Dive: AI-Powered Marketing to Get More Leads and Customers with HyperGro...
saastr
 
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development ProvidersYour One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
akankshawande
 
Main news related to the CCS TSI 2023 (2023/1695)
Main news related to the CCS TSI 2023 (2023/1695)Main news related to the CCS TSI 2023 (2023/1695)
Main news related to the CCS TSI 2023 (2023/1695)
Jakub Marek
 
UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
DianaGray10
 
June Patch Tuesday
June Patch TuesdayJune Patch Tuesday
June Patch Tuesday
Ivanti
 
Webinar: Designing a schema for a Data Warehouse
Webinar: Designing a schema for a Data WarehouseWebinar: Designing a schema for a Data Warehouse
Webinar: Designing a schema for a Data Warehouse
Federico Razzoli
 
National Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practicesNational Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practices
Quotidiano Piemontese
 
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
Edge AI and Vision Alliance
 
Choosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptxChoosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptx
Brandon Minnick, MBA
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
名前 です男
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
Matthew Sinclair
 
Monitoring and Managing Anomaly Detection on OpenShift.pdf
Monitoring and Managing Anomaly Detection on OpenShift.pdfMonitoring and Managing Anomaly Detection on OpenShift.pdf
Monitoring and Managing Anomaly Detection on OpenShift.pdf
Tosin Akinosho
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc
 
Recommendation System using RAG Architecture
Recommendation System using RAG ArchitectureRecommendation System using RAG Architecture
Recommendation System using RAG Architecture
fredae14
 
Introduction of Cybersecurity with OSS at Code Europe 2024
Introduction of Cybersecurity with OSS  at Code Europe 2024Introduction of Cybersecurity with OSS  at Code Europe 2024
Introduction of Cybersecurity with OSS at Code Europe 2024
Hiroshi SHIBATA
 
Best 20 SEO Techniques To Improve Website Visibility In SERP
Best 20 SEO Techniques To Improve Website Visibility In SERPBest 20 SEO Techniques To Improve Website Visibility In SERP
Best 20 SEO Techniques To Improve Website Visibility In SERP
Pixlogix Infotech
 
How to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For FlutterHow to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For Flutter
Daiki Mogmet Ito
 
Taking AI to the Next Level in Manufacturing.pdf
Taking AI to the Next Level in Manufacturing.pdfTaking AI to the Next Level in Manufacturing.pdf
Taking AI to the Next Level in Manufacturing.pdf
ssuserfac0301
 
Generating privacy-protected synthetic data using Secludy and Milvus
Generating privacy-protected synthetic data using Secludy and MilvusGenerating privacy-protected synthetic data using Secludy and Milvus
Generating privacy-protected synthetic data using Secludy and Milvus
Zilliz
 

Recently uploaded (20)

Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
 
Deep Dive: AI-Powered Marketing to Get More Leads and Customers with HyperGro...
Deep Dive: AI-Powered Marketing to Get More Leads and Customers with HyperGro...Deep Dive: AI-Powered Marketing to Get More Leads and Customers with HyperGro...
Deep Dive: AI-Powered Marketing to Get More Leads and Customers with HyperGro...
 
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development ProvidersYour One-Stop Shop for Python Success: Top 10 US Python Development Providers
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers
 
Main news related to the CCS TSI 2023 (2023/1695)
Main news related to the CCS TSI 2023 (2023/1695)Main news related to the CCS TSI 2023 (2023/1695)
Main news related to the CCS TSI 2023 (2023/1695)
 
UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
 
June Patch Tuesday
June Patch TuesdayJune Patch Tuesday
June Patch Tuesday
 
Webinar: Designing a schema for a Data Warehouse
Webinar: Designing a schema for a Data WarehouseWebinar: Designing a schema for a Data Warehouse
Webinar: Designing a schema for a Data Warehouse
 
National Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practicesNational Security Agency - NSA mobile device best practices
National Security Agency - NSA mobile device best practices
 
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
“Building and Scaling AI Applications with the Nx AI Manager,” a Presentation...
 
Choosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptxChoosing The Best AWS Service For Your Website + API.pptx
Choosing The Best AWS Service For Your Website + API.pptx
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
 
Monitoring and Managing Anomaly Detection on OpenShift.pdf
Monitoring and Managing Anomaly Detection on OpenShift.pdfMonitoring and Managing Anomaly Detection on OpenShift.pdf
Monitoring and Managing Anomaly Detection on OpenShift.pdf
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
 
Recommendation System using RAG Architecture
Recommendation System using RAG ArchitectureRecommendation System using RAG Architecture
Recommendation System using RAG Architecture
 
Introduction of Cybersecurity with OSS at Code Europe 2024
Introduction of Cybersecurity with OSS  at Code Europe 2024Introduction of Cybersecurity with OSS  at Code Europe 2024
Introduction of Cybersecurity with OSS at Code Europe 2024
 
Best 20 SEO Techniques To Improve Website Visibility In SERP
Best 20 SEO Techniques To Improve Website Visibility In SERPBest 20 SEO Techniques To Improve Website Visibility In SERP
Best 20 SEO Techniques To Improve Website Visibility In SERP
 
How to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For FlutterHow to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For Flutter
 
Taking AI to the Next Level in Manufacturing.pdf
Taking AI to the Next Level in Manufacturing.pdfTaking AI to the Next Level in Manufacturing.pdf
Taking AI to the Next Level in Manufacturing.pdf
 
Generating privacy-protected synthetic data using Secludy and Milvus
Generating privacy-protected synthetic data using Secludy and MilvusGenerating privacy-protected synthetic data using Secludy and Milvus
Generating privacy-protected synthetic data using Secludy and Milvus
 

2011/2012 CAST report on Application Software Quality (CRASH)

  • 1.
  • 2. The CRASH Report - 2011/12 • Summary of Key Findings Contents Introduction.................................................................... 1 Overview..................................................................................................................... 1 The Sample.................................................................................................................. 1 Terminology................................................................................................................ 3 PART I: Adding to Last Year’s Insights............................. 4 Finding 1—COBOL Applications Show Higher Security Scores................................. 4 Finding 2—Performance Scores Lower in Java-EE....................................................... 6 Finding 3—Modularity Tempers the Effect of Size on Quality..................................... 7 Finding 4—Maintainability Lowest in Government Applications................................ 9 Finding 5—No Structural Quality Difference Due to Sourcing or Shoring................ 13 PART II: New Insights This Year.................................... 13 Finding 6—Development Methods Affect Structural Quality ................................... 14 Finding 7— Structural Quality Decline with Velocity............................................... 15 Finding 8—Security Scores Lowest in IT Consulting................................................ 16 Finding 9—Maintainability Declines with Number of Users..................................... 17 Finding 10—Average $3.61 of Technical Debt per LOC........................................... 18 PART III: Technical Debt .............................................. 18 Finding 11— Majority of Technical Debt Impacts Cost and Adaptability.................. 20 Finding 12—Technical Debt is Highest in Java-EE................................................... 21 Future Technical Debt Analyses................................................................................. 21 Concluding Comments.................................................. 22
Introduction

Overview

This is the second annual report produced by CAST on global trends in the structural quality of business applications software. These reports highlight trends in five structural quality characteristics—Robustness, Security, Performance, Transferability, and Changeability—across technology domains and industry segments. Structural quality refers to the engineering soundness of the architecture and coding of an application rather than to the correctness with which it implements the customer's functional requirements. Evaluating an application for structural quality defects is critical since they are difficult to detect through standard testing, and are the defects most likely to cause operational problems such as outages, performance degradation, breaches by unauthorized users, or data corruption.

This summary report provides an objective, empirical foundation for discussing the structural quality of software applications throughout industry and government. It highlights some key findings from a complete report that will provide deeper analysis of the structural quality characteristics and their trends across industry segments and technologies. The full report will also present the most frequent violations of good architectural and coding practice in each technology domain. You can request details on the full report at: http://research.castsoftware.com.

The Sample

(Sidebar: 365 million lines of code • 745 applications • 160 organizations)

The data in this report are drawn from the Appmarq benchmarking repository maintained by CAST, comprised of 745 applications submitted by 160 organizations for the analysis and measurement of their structural quality characteristics, representing 365 MLOC (million lines of code) or 11.3 million Backfired Function Points. These organizations are located primarily in the United States, Europe, and India. This data set is almost triple the size of last year's sample of 288 applications from 75 organizations comprising 108 MLOC.

The sample is widely distributed across size categories and appears representative of the types of applications in business use. Figure 1 displays the distribution of these applications over eight size categories measured in lines of code. The applications range from 10 KLOC (kilo or thousand lines of code) to just over 11 MLOC. This distribution includes 24% less than 50 KLOC, 33% between 50 KLOC and 200 KLOC, 31% between 201 KLOC and 1 MLOC, and 12% over 1 MLOC.

As is evident in Table 1, almost half of the sample (46%) consists of Java-EE applications, while .NET, ABAP, COBOL, and Oracle Forms each constituted between 7% and 11% of the sample. Applications with a significant mix of two or more technologies constituted 16% of the sample.
Figure 1. Distribution of Applications by Size Categories (frequency of applications in eight size bins: 10-20, 20-50, 50-100, 100-200, 200-500, 500-1K, 1K-5K, and >5K KLOC).

As shown in Table 1, there are 10 industry segments represented in the 160 organizations that submitted applications to the Appmarq repository. Some trends that can be observed in these data include the heaviest concentration of ABAP applications in manufacturing and IT consulting, while COBOL applications were concentrated most heavily in financial services and insurance. Java-EE applications accounted for one-third to one-half of the applications in each industry segment.

Table 1. Applications Grouped by Technology and Industry Segments

Industry             .NET  ABAP   C  C++  COBOL  Java-EE  Mixed Tech  Oracle Forms  Oracle CRM/ERP  Other  Visual Basic  Total
Energy & Utilities      3     5   0    0      0       26           3             0               1      2             0     40
Financial Services      5     0   0    2     39       46          50             3               0      4             1    150
Insurance              10     0   1    1     21       27           5             1               2      0             2     70
IT Consulting          11    11   2    2     13       51           6             0               6      1             6    109
Manufacturing           8    19   3    2      4       46           7             0               2      1             2     94
Other                   3     2   1    2      1       11           9             1               0      0             0     30
Government              0     9   1    0      0       25           7            34               0      0             2     78
Retail                  5     5   2    0      2       11           5             0               1      1             0     32
Technology              4     1   0    0      0       14           1             0               0      1             0     21
Telecom                 2     7   4    0      0       82          24             0               0      1             1    121
Total                  51    59  14    9     80      339         117            39              12     11            14    745
This sample differs in important characteristics from last year's sample, including a higher proportion of large applications and a higher proportion of Java-EE. Consequently, it will not be possible to establish year-on-year trends by comparing this year's findings to those reported last year. As the number and diversity of applications in the Appmarq repository grows and their relative proportions stabilize, we anticipate reporting year-on-year trends in future reports.

Terminology

LOC: Lines of code. The size of an application is frequently reported in KLOC (kilo or thousand lines of code) or MLOC (million lines of code).

Structural Quality: The non-functional quality of a software application that indicates how well the code is written from an engineering perspective. It is sometimes referred to as technical quality or internal quality, and represents the extent to which the application is free from violations of good architectural or coding practices.

Structural Quality Characteristics: This report concentrates on the five structural quality characteristics defined below. The scores are computed on a scale of 1 (high risk) to 4 (low risk) by analyzing the application for violations against a set of good coding and architectural practices, and using an algorithm that weights the severity of each violation and its relevance to each individual quality characteristic. The quality characteristics are attributes that affect:

Robustness: The stability of an application and the likelihood of introducing defects when modifying it.

Performance: The efficiency of the software layer of the application.

Security: An application's ability to prevent unauthorized intrusions.

Transferability: The ease with which a new team can understand the application and quickly become productive working on it.

Changeability: An application's ability to be easily and quickly modified.

We also measure:

Total Quality Index: A composite score computed from the five quality characteristics listed above.

Technical Debt: Technical Debt represents the effort required to fix violations of good architectural and coding practices that remain in the code when an application is released. Technical Debt is calculated only on violations that the organization intends to remediate. Like financial debt, technical debt incurs interest in the form of extra costs accruing for a violation until it is remediated, such as the effort required to modify the code or inefficient use of hardware or network resources.

Violations: A structure in the source code that is inconsistent with good architectural or coding practices and has proven to cause problems that affect either the cost or risk profile of an application.
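To make the scoring description above concrete, the sketch below shows one way a severity- and relevance-weighted violation count could be mapped onto the 1 (high risk) to 4 (low risk) scale. The weights, relevance values, rule names, and normalization are illustrative assumptions only; they are not the algorithm used by the CAST Application Intelligence Platform.

```python
# Illustrative sketch only: the report states that scores are computed on a
# 1 (high risk) to 4 (low risk) scale by weighting each violation's severity
# and its relevance to a quality characteristic. The weighting scheme,
# normalization, and thresholds below are hypothetical, not CAST's algorithm.

SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 9}   # assumed weights

def characteristic_score(violations, kloc, relevance, worst_density=50.0):
    """Map a weighted violation density to a 1.0-4.0 score (higher = lower risk).

    violations: list of (severity, rule_id) tuples found by static analysis
    relevance:  dict rule_id -> 0..1 relevance to this quality characteristic
    worst_density: assumed density at which the score bottoms out at 1.0
    """
    weighted = sum(SEVERITY_WEIGHT[sev] * relevance.get(rule, 0.0)
                   for sev, rule in violations)
    density = weighted / max(kloc, 1e-9)          # weighted violations per KLOC
    # Linear mapping: zero density -> 4.0 (low risk), worst_density -> 1.0
    return max(1.0, 4.0 - 3.0 * min(density / worst_density, 1.0))

# Example: a hypothetical 120 KLOC application with violations relevant to Security
violations = [("high", "sql_injection"), ("medium", "hardcoded_credentials"),
              ("low", "empty_catch_block")] * 40
relevance = {"sql_injection": 1.0, "hardcoded_credentials": 0.8, "empty_catch_block": 0.1}
print(round(characteristic_score(violations, kloc=120, relevance=relevance), 2))
```

Under these assumed weights the hypothetical application scores roughly 3.8, that is, toward the low-risk end of the scale.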
PART I: Adding to Last Year's Insights

Finding 1—COBOL Applications Show Higher Security Scores

(Sidebar: The distribution of Security scores suggests some industry segments pay more attention to security.)

The distribution of Security scores across the current Appmarq sample is presented in Figure 2. The bi-modal distribution of Security scores indicates that applications can be grouped into two distinct types: one group that has very high scores and a second group with moderate scores and a long tail toward poor scores. The distribution of Security scores is wider than for any of the other quality characteristics, suggesting strong variations in the attention paid to security among different types of applications or industry segments.

Further analysis on the data presented in Figure 3 revealed that applications with higher Security scores continue to be predominantly large COBOL applications in the financial services and insurance sectors where high security for confidential financial information is mandated. These scores should not be surprising since COBOL applications run in mainframe environments where they are not as exposed to the security challenges of the internet. In addition, these are typically the oldest applications in our sample and have likely undergone more extensive remediation for security vulnerabilities over time.

The lower Security scores for other types of applications are surprising. In particular, .NET applications received some of the lowest Security scores. These data suggest that attention to security may be focused primarily on applications governed by regulatory compliance or protection of financial data, while less attention is paid to security in other types of applications.

Figure 2. Distribution of Security Scores (frequency of applications by Security score, 1.0 to 4.0).
Figure 3. Security Scores by Technology (box plot of minimum, 25th percentile, median, 75th percentile, and maximum Security scores for .NET, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic).

Figure 4. Distribution of Performance Scores (frequency of applications by Performance score, 1.0 to 4.0).
Finding 2—Performance Scores Lower in Java-EE

As displayed in Figure 4, Performance scores were widely distributed, and in general are skewed with the highest concentration towards better performance. These data were produced through software analysis and do not constitute a dynamic analysis of an application's behavior or actual performance in use. These scores reflect detection of violations of good architectural or coding practices that may have performance implications in operation, such as the existence of expensive calls in loops that operate on large data tables.

Further analysis of the data presented in Figure 5 revealed that Java-EE applications received significantly lower Performance scores than other languages. Modern development languages such as Java-EE are generally more flexible and allow developers to create dynamic constructs that can be riskier in operation. This flexibility is an advantage that has encouraged their adoption, but can also be a drawback that results in less predictable system behavior. In addition, developers who have mastered Java-EE may still have misunderstandings about how it interacts with other technologies or frameworks in the application such as Hibernate or Struts. Generally, low scores on a quality characteristic often reflect not merely the coding within a technology, but also the subtleties of how language constructs interact with other technology frameworks in the application and therefore violate good architectural and coding practices.

Figure 5. Performance Scores by Technology (Performance scores for .NET, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic).
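The "expensive call in a loop" violation mentioned above is easiest to see side by side with its remediated form. The sketch below is only an illustration of that pattern; the database object, table, and function names are hypothetical, and the report's findings come from static analysis of real applications rather than from code like this.

```python
# Illustrative sketch of the violation class described above: an expensive
# call repeated inside a loop over a large data table, contrasted with a
# batched alternative. All names here are hypothetical examples.

def load_prices_per_row(db, order_ids):
    # Anti-pattern: one query per iteration; cost grows with the size of the table.
    return [db.query("SELECT price FROM orders WHERE id = ?", oid)
            for oid in order_ids]

def load_prices_batched(db, order_ids):
    # Remediated shape: a single set-based query issued outside the loop.
    placeholders = ",".join("?" * len(order_ids))
    return db.query(f"SELECT id, price FROM orders WHERE id IN ({placeholders})",
                    *order_ids)
```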
Finding 3—Modularity Tempers the Effect of Size on Quality

(Sidebar: A negative correlation between size and quality is evident for COBOL applications.)

Appmarq data contradicts the common belief that the quality of an application necessarily degrades as it grows larger. Across the full Appmarq sample, the Total Quality Index (a composite of the five quality characteristic scores) failed to correlate significantly with the size of applications. However, after breaking the sample into technology segments, we found that the Total Quality Index did correlate negatively with the size of COBOL applications as is evident in Figure 6, where the data are plotted on a logarithmic scale to improve the visibility of the correlation. The negative correlation indicates that variations in the size of COBOL applications account for 11% of the variation in the Total Quality Index (R² = .11).

One explanation for the negative correlation between size and quality in COBOL applications is that COBOL was designed long before the strong focus on modularity in software design. Consequently, COBOL applications are constructed with many large and complex components. More recent languages encourage modularity and other techniques that control the amount of complexity added as applications grow larger. For instance, Figure 7 reveals that the percentage of highly complex components (components with high Cyclomatic Complexity and strong coupling to other components) in COBOL applications is much higher than in other languages, while this percentage is lower for the newer object-oriented technologies like Java-EE and .NET, consistent with object-oriented principles. However, high levels of modularity may present a partial explanation of the lower Performance scores in Java-EE applications discussed in Finding 2, as modularity could adversely impact the application's performance.

Figure 6. Correlation of Total Quality Index with Size of COBOL Applications (Total Quality Index plotted against COBOL application size in KLOC on a logarithmic scale).
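For readers who want to run the same kind of analysis on their own portfolio data, the sketch below fits a simple least-squares line of the Total Quality Index against log10(size in KLOC) and reports R². The data points are invented for demonstration; the 0.11 figure above comes from the actual COBOL applications in the Appmarq sample.

```python
# Illustrative sketch of the relationship in Finding 3: regress the Total
# Quality Index against log10(application size) and report R^2.
# The sample pairs below are hypothetical.
import math

def r_squared(xs, ys):
    """Coefficient of determination for a simple least-squares line y = a + b*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = mean_y - b * mean_x
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical (size in KLOC, Total Quality Index) pairs for COBOL applications
sample = [(15, 3.4), (40, 3.3), (90, 3.1), (250, 3.2), (600, 2.9),
          (1200, 3.0), (3000, 2.8), (8000, 2.7)]
kloc, tqi = zip(*sample)
log_size = [math.log10(k) for k in kloc]
print(f"R^2 = {r_squared(log_size, tqi):.2f}")
```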
The increased complexity of components in COBOL is consistent with their much greater size compared with components in other languages.

Figure 7. Percentage of Components that are Highly Complex in Applications by Technology (percentage of highly complex objects in applications for .NET, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic).

Figure 8. Average Object Size Comparison Across Different Technologies (average object size in lines of code for .NET, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic).
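As a hedged sketch, the snippet below shows how the two portfolio metrics behind Figures 7 and 8 can be computed from component-level measurements. The cyclomatic-complexity and coupling thresholds, the Component fields, and the sample values are assumptions for illustration; the report does not publish the exact cutoffs CAST uses to label a component "highly complex".

```python
# Illustrative calculation of the share of "highly complex" components and the
# average component size. Thresholds and sample data are assumed, not CAST's.
from dataclasses import dataclass

@dataclass
class Component:
    loc: int                  # lines of code in the component
    cyclomatic: int           # cyclomatic complexity
    fan_out: int              # number of other components it calls (coupling)

def is_highly_complex(c, cc_threshold=20, coupling_threshold=10):
    # Assumed definition: high cyclomatic complexity AND strong coupling.
    return c.cyclomatic >= cc_threshold and c.fan_out >= coupling_threshold

def portfolio_metrics(components):
    complex_share = sum(is_highly_complex(c) for c in components) / len(components)
    avg_size = sum(c.loc for c in components) / len(components)
    return complex_share, avg_size

# Hypothetical mini-portfolio
parts = [Component(700, 35, 14), Component(45, 6, 3), Component(30, 4, 2),
         Component(1200, 60, 22), Component(25, 3, 1)]
share, avg = portfolio_metrics(parts)
print(f"highly complex: {share:.0%}, average size: {avg:.0f} LOC")
```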
Figure 8 displays the average number of components per KLOC for applications developed in each of the technologies. While the average component size for most development technologies in the Appmarq repository is between 20 and 50 LOC, the average COBOL component is usually well over 600 LOC.

Measurements and observations of COBOL applications in the Appmarq repository suggest that they are structurally different from components developed in other technologies, both in size and complexity. Consequently we do not believe that COBOL applications should be directly benchmarked against other technologies because comparisons may be misleading and mask important findings related to comparisons among other, more similar technologies. Although we will continue reporting COBOL with other technologies in this report, we will identify any analyses where COBOL applications skew the results.

Finding 4—Maintainability Lowest in Government Applications

Transferability and Changeability are critical components of an application's cost of ownership, and scores for these quality characteristics in the Appmarq sample are presented in Figures 9 and 10. The spread of these distributions suggests different costs of ownership for different segments of this sample.

When Transferability and Changeability scores were compared by industry segment, the results presented in Figure 11 for Transferability revealed that scores for government applications were lower than those for other segments. The results for Changeability were similar, although the differences between government and other industry segments were not as pronounced. This sample includes government applications from both the United States and European Union.

Figure 9. Distribution of Transferability Scores (frequency of applications by Transferability score, 2.0 to 4.0).
Figure 10. Distribution of Changeability Scores (frequency of applications by Changeability score, 1.5 to 4.0).

Figure 11. Transferability Scores by Industry Segment (Transferability scores for Energy & Utilities, Financial Services, Manufacturing, IT Consulting, Government, Technology, Insurance, Telecom, and Retail).
(Sidebar: Government applications have the most chronic complexity profiles.)

Although we do not have cost data, these results suggest that government agencies are spending significantly more of their IT budgets on maintaining existing applications than on creating new functionality. Not surprisingly, the Gartner 2011 IT Staffing & Spending report stated that the government sector spends about 73% of its budget on maintenance, higher than any other segment.

The lower Transferability and Changeability scores for government agencies may partially result from unique application acquisition conditions. In the Appmarq sample, 75% of government applications were acquired through contracted work, compared to 50% of the applications in the private sector being obtained through outsourcing. Multiple contractors working on the same application over time, disincentives in contracts, contractors not having to maintain the code at their own cost, and immature acquisition practices are potential explanations for the lower Transferability and Changeability scores on government applications. Regardless of the cause, Figure 12 indicates that when COBOL applications are removed from the sample, government applications have the highest proportion of complex components in the Appmarq sample.

Figure 12. Complexity of Components (Not Including COBOL) (percentage of highly complex objects in applications by industry segment).
Compared to Transferability scores, the Changeability scores exhibited an even wider distribution, indicating that they may be affected by factors other than industry segment. Figure 13 presents Changeability scores by technology type, and shows ABAP, COBOL, and Java-EE had higher Changeability scores than other technologies. It is not surprising that ABAP achieved the highest Changeability scores since most ABAP code customizes commercial off-the-shelf SAP systems.

The lowest Changeability scores were seen in applications written in C, a language that allows great flexibility in development, but apparently sacrifices ease of modification.

Figure 13. Changeability Scores by Technology (Changeability scores for .NET, ABAP, C, C++, COBOL, Java EE, Oracle Forms, Oracle ERP, and Visual Basic).
PART II: New Insights This Year

Finding 5—No Structural Quality Difference Due to Sourcing or Shoring

(Sidebar: Variations in quality are not explained by sourcing model alone.)

The Appmarq sample was analyzed based on whether applications were managed by inhouse or outsourced resources. A slightly larger proportion of the applications were developed by outsourced resources (n=390) compared to inhouse resources (n=355). Figure 14 presents data comparing inhouse and outsourced applications, showing no difference between their Total Quality Index scores. This finding of no significant differences was also observed for each of the individual quality characteristic scores. One possible explanation for these findings is that many of the outsourced applications were initially developed inhouse before being outsourced for maintenance. Consequently, it is not unexpected that their structural quality characteristics are similar to those whose maintenance remained inhouse.

Similar findings were observed for applications developed onshore versus offshore. Most of the applications in the Appmarq sample were developed onshore (n=585) even if outsourced. As is evident in Figure 15, no significant differences were detected in the Total Quality Index between onshore and offshore applications. There were also no differences observed among each of the individual quality characteristic scores.

Figure 14. Total Quality Index Scores for Inhouse vs. Outsourced (Total Quality Index scores for inhouse and outsourced applications).

Figure 15. Total Quality Index Scores for Onshore vs. Offshore (Total Quality Index scores for onshore and offshore applications).
Finding 6—Development Methods Affect Structural Quality

(Sidebar: Quality lowest with custom development methods, and Waterfall scores highest in Changeability and Transferability.)

The five quality characteristics were analyzed for differences based on the development method used on each of the applications. For the 204 applications that reported their development method, the most frequently reported methods fell into four categories: agile/iterative methods (n=63), waterfall (n=54), agile/waterfall mix (n=40), and custom methods developed for each project (n=47). As is evident in Figure 16a, scores for the Total Quality Index were lowest for applications developed using custom methods rather than relying on a more established method. Similar trends were observed for all of the quality characteristics except Transferability.

The Transferability and Changeability scores for applications that reported using waterfall methods are higher than those using agile/iterative methods, as displayed in Figures 16b and 16c. This trend was stronger for Changeability than for Transferability. In both cases, the trend for a mix of agile and waterfall methods was closer to the trend for agile than to the trend for waterfall.

It appears from these data that applications developed with agile methods are nearly as effective as waterfall at managing the structural quality affecting business risk (Robustness, Performance, and Security), but less so at managing the structural quality factors affecting cost (Transferability and Changeability). The agile methods community refers to structural quality as managing Technical Debt, a topic we will discuss in Part III.

Figures 16a, 16b, and 16c. Total Quality Index, Transferability, and Changeability Scores by Development Methods (scores for agile/iterative, waterfall, agile/waterfall mix, and custom methods).
Finding 7—Scores Decline with More Frequent Releases

The five quality characteristics were analyzed based on the number of releases per year for each of the applications. The 319 applications that reported the number of releases per year were grouped into three categories: one to three releases (n=140), four to six releases (n=114), and more than six releases (n=59). As shown in Figures 17a, 17b, and 17c, scores for Robustness, Security, and Changeability declined as the number of releases grew, with the trend most pronounced for Security. Similar trends were not observed for Performance and Transferability. In this sample most of the applications with six or more releases per year were reported to have been developed using custom methods, and the sharp decline for projects with more than six releases per year may be due in part to less effective development methods.

Figures 17a, 17b, and 17c. Robustness, Security, and Changeability Scores by Number of Releases per Year (scores for 1 to 3, 3 to 6, and more than 6 major releases per year).
Finding 8—Security Scores Lowest in IT Consulting

As is evident in Figure 18, Security scores are lower in IT consulting than in other industry segments. These results did not appear to be caused by technology, since IT consulting displayed one of the widest distributions of technologies in the sample. Deeper analysis of the IT consulting data indicated that the lower Security scores were primarily characteristic of applications that had been outsourced to them by customers. In essence, IT consulting companies were receiving applications from their customers for maintenance that already contained significantly more violations of good security practices.

Figure 18. Security Scores by Industry Segment (Security scores for Energy & Utilities, Financial Services, Manufacturing, IT Consulting, Government, Technology, Insurance, Telecom, and Retail).
Finding 9—Maintainability Declines with Number of Users

The five quality characteristics were analyzed to detect differences based on the number of users for each of the 207 applications that reported usage data. Usage levels were grouped into 500 or less (n=38), 501 to 1000 (n=43), 1001 to 5000 (n=26), and greater than 5000 (n=100). Figures 19a and 19b show scores for Transferability and Changeability rose as the number of users grew. Similar trends were not observed for Robustness, Performance, or Security. A possible explanation for these trends is that applications with a higher number of users are subject to more frequent modifications, putting a premium on Transferability and Changeability for rapid turnaround of requests for defect fixes or enhancements. Also, the most mission critical applications rely on the most rigid (waterfall-like) processes.

Figures 19a and 19b. Transferability and Changeability by Number of Application Users (scores for 500 or less, 501 to 1000, 1001 to 5000, and greater than 5000 end users).
PART III: Technical Debt

Finding 10—Average $3.61 of Technical Debt per LOC

(Sidebar: This report takes a very conservative approach to quantifying Technical Debt.)

Technical Debt represents the effort required to fix problems that remain in the code when an application is released. Since it is an emerging concept, there is little reference data regarding the Technical Debt in a typical application. The CAST Appmarq benchmarking repository provides a unique opportunity to calculate Technical Debt across different technologies, based on the number of engineering flaws and violations of good architectural and coding practices in the source code. These results can provide a frame of reference for the application development and maintenance community.

Since IT organizations will not have the time or resources to fix every problem in the source code, we calculate Technical Debt as a declining proportion of violations based on their severity. In our method, at least half of the high severity violations will be prioritized for remediation, while only a small proportion of the low severity violations will be remediated. We developed a parameterized formula for calculating the Technical Debt of an application with very conservative assumptions about parameter values such as the percent of violations to be remediated at each level of severity, the time required to fix a violation, and the burdened hourly rate for a developer. This formula for calculating the Technical Debt of an application is presented below.

To evaluate the average Technical Debt across the Appmarq sample, we first calculated the Technical Debt per line of code for each of the individual applications. These individual application scores were then averaged across the Appmarq sample to produce an average Technical Debt of $3.61 per line of code. Consequently, a typical application accrues $361,000 of Technical Debt for each 100,000 LOC, and applications of 300,000 or more LOC carry more than $1 million of Technical Debt ($1,083,000). The cost of fixing Technical Debt is a primary contributor to an application's cost of ownership, and a significant driver of the high cost of IT.

This year's Technical Debt figure of $3.61 is larger than the 2010 figure of $2.82. However, this difference cannot be interpreted as growth of Technical Debt by nearly one third over the past year. This difference is at least in part, and quite probably in large part, a result of a change in the mix of applications included in the current sample.
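As a sanity check on the arithmetic above, the sketch below reproduces the sample-level statistic: per-application Technical Debt is divided by the application's lines of code, the ratios are averaged across the sample, and the average is scaled back up to a 100 KLOC application. The three applications in the list are hypothetical; only the method follows the report.

```python
# Sketch of the sample-level averaging described above.
# The per-application figures are hypothetical; the report's actual
# average across the Appmarq sample is $3.61 per LOC.

apps = [  # (technical debt in $, lines of code) for a hypothetical sample
    (250_000, 80_000),
    (1_100_000, 300_000),
    (90_000, 40_000),
]
debt_per_loc = [debt / loc for debt, loc in apps]
average = sum(debt_per_loc) / len(debt_per_loc)
print(f"average Technical Debt per LOC: ${average:.2f}")
print(f"implied debt for a 100 KLOC application: ${average * 100_000:,.0f}")
```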
Technical Debt Calculation

Our approach for calculating Technical Debt is defined below:

1. The density of coding violations per thousand lines of code (KLOC) is derived from source code analysis using the CAST Application Intelligence Platform. The coding violations highlight issues around Security, Performance, Robustness, Transferability, and Changeability of the code.

2. Coding violations are categorized into low, medium, and high severity violations. In developing the estimate of Technical Debt, it is assumed that only 50% of high severity problems, 25% of moderate severity problems, and 10% of low severity problems will ultimately be corrected in the normal course of operating the application.

3. To be conservative, we assume that low, moderate, and high severity problems would each take one hour to fix, although industry data suggest these numbers should be higher and in many cases are much higher, especially when the fix is applied during operation. We assumed developer cost at an average burdened rate of $75 per hour.

4. Technical Debt is therefore calculated using the following formula:

Technical Debt = (10% of Low Severity Violations + 25% of Medium Severity Violations + 50% of High Severity Violations) * No. of Hours to Fix * Cost/Hr.
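The formula in step 4 translates directly into code. The sketch below uses the report's stated parameter values (10%, 25%, and 50% of low, medium, and high severity violations, one hour per fix, and a $75 burdened hourly rate); the violation counts passed in at the bottom are hypothetical.

```python
# Transcription of the Technical Debt formula above, with the report's stated
# parameter values as defaults. The example violation counts are hypothetical.

def technical_debt(low, medium, high, hours_to_fix=1.0, cost_per_hour=75.0,
                   remediation_share=(0.10, 0.25, 0.50)):
    """Technical Debt in dollars for one application."""
    low_share, med_share, high_share = remediation_share
    violations_to_fix = low_share * low + med_share * medium + high_share * high
    return violations_to_fix * hours_to_fix * cost_per_hour

# Hypothetical 100 KLOC application with violation counts by severity level
debt = technical_debt(low=6000, medium=2500, high=800)
print(f"Technical Debt: ${debt:,.0f}")
print(f"Technical Debt per LOC: ${debt / 100_000:.2f}")
```

With these parameters, the debt per LOC of an application depends only on its violation density at each severity level, which is why the report can compare technologies on a $/LOC basis.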
Finding 11—Majority of Technical Debt Impacts Cost and Adaptability

(Sidebar: Only one third of Technical Debt carries immediate business risks.)

Figure 20 displays the amount of Technical Debt attributed to violations that affect each of the quality characteristics. Seventy percent of the Technical Debt was attributed to violations that affect IT cost: Transferability and Changeability. The other thirty percent involved violations that affect risks to the business: Robustness, Performance, and Security.

Similar to the findings in the complete sample, in each of the technology platforms the cost factors of Transferability and Changeability accounted for the largest proportion of Technical Debt. This trend is shown in Figure 21, which displays the spread of Technical Debt across quality characteristics for each language. However, it is notable that the proportion of Technical Debt attributed to the three characteristics associated with risk (Robustness, Performance, and Security) is much lower in C, C++, COBOL, and Oracle ERP. Technical Debt related to Robustness is proportionately higher in ABAP, Oracle Forms, and Visual Basic.

Figure 20. Technical Debt by Quality Characteristics for the Complete Appmarq Sample (Transferability 40%, Changeability 30%, Robustness 18%, Security 7%, Performance 5%).

Figure 21. Technical Debt by Quality Characteristics for Each Language (percentage breakdown of Technical Debt across the five quality characteristics for each technology).
Finding 12—Technical Debt is Highest in Java-EE

Technical Debt was analyzed within each of the development technologies. As shown in Figure 22, Java-EE had the highest Technical Debt scores, averaging $5.42 per LOC. Java-EE also had the widest distribution of Technical Debt scores, although scores for .NET and Oracle Forms were also widely distributed. COBOL and ABAP had some of the lowest Technical Debt scores.

Future Technical Debt Analyses

The parameters used in calculating Technical Debt can vary across applications, companies, and locations based on factors such as labor rates and development environments. During the past two years we have chosen parameter values based on the previously described conservative assumptions. In the future, we anticipate changing these values based on more accurate industry data on average time to fix violations and strategies for determining which violations to fix. The Technical Debt results presented in this report are suggestive of industry trends based on the assumptions in our parameter values and calculations. Although different assumptions about the values to set for parameters in our equations would produce different cost results, the relative comparisons within these data would not change, nor would the fundamental message that Technical Debt is large and must be systematically addressed to reduce application costs and risks and improve adaptability.

Figure 22. Technical Debt within Each Technology (Technical Debt in $/KLOC for .NET, ABAP, C, C++, COBOL, Java EE, Oracle Forms, and Visual Basic).
Concluding Comments

The findings in this report establish differences in the structural quality of applications based on differences in development technology, industry segment, number of users, development method, and frequency of release. However, contrary to expectations, differences in structural quality were not related to the size of the application, whether its development was onshore or offshore, and whether its team was internal or outsourced. These results help us better understand the factors that affect structural quality and bust myths that lead to incorrect conclusions about the causes of structural problems.

These data also allow us to put actual numbers to the growing discussion of Technical Debt—a discussion that has suffered from a dearth of empirical evidence. While we make no claim that the Technical Debt figures in this report are definitive because of the assumptions underlying our calculations, we are satisfied that these results provide a strong foundation for continuing discussion and the development of more comprehensive quantitative models.

We strongly caution against interpreting year-on-year trends in these data due to changes in the mix of applications making up the sample. As the Appmarq repository grows and the proportional mix of applications stabilizes, with time we will be able to establish annual trends and may ultimately be able to do this within industry segments and technology groups. Appmarq is a benchmark repository with growing capabilities that will allow the depth and quality of our analysis and measurement of structural quality to improve each year.

The observations from these data suggest that development organizations are focused most heavily on Performance and Security in certain critical applications. Less attention appears to be focused on removing the Transferability and Changeability problems that increase the cost of ownership and reduce responsiveness to business needs. These results suggest that application developers are still mostly in reaction mode to the business rather than being proactive in addressing the long term causes of IT costs and geriatric applications.

Finally, the data and findings in this report are representative of the insights that can be gleaned by organizations who establish their own Application Intelligence Centers to collect and analyze structural quality data. Such data provide a natural focus for the application of statistical quality management and lean techniques. The benchmarks and insights gained from such analyses provide excellent input for executive governance over the cost and risk of IT applications.
CAST Research Labs

CAST Research Labs (CRL) was established to further the empirical study of software implementation in business technology. Starting in 2007, CRL has been collecting metrics and structural characteristics from custom applications deployed by large, IT-intensive enterprises across North America, Europe and India. This unique dataset, currently standing at approximately 745 applications, forms a basis to analyze actual software implementation in industry. CRL focuses on the scientific analysis of large software applications to discover insights that can improve their structural quality. CRL provides practical advice and annual benchmarks to the global application development community, as well as interacting with the academic community and contributing to the scientific literature.

As a baseline, each year CRL will be publishing a detailed report of software trends found in our industry repository. The executive summary of the report can be downloaded free of charge. The full report can be purchased by contacting the CAST Information Center at +1 (877) 852 2278 or visit: http://research.castsoftware.com.

Authors

Jay Sappidi, Sr. Director, CAST Research Labs
Dr. Bill Curtis, Senior Vice President and Chief Scientist, CAST Research Labs
Alexandra Szynkarski, Research Associate, CAST Research Labs

For more information, please visit research.castsoftware.com