Research

Publication Date: 29 June 2007    ID Number: G00147982

Hype Cycle for Application Development, 2007

Jim Duggan, Daniel B. Stang, Partha Iyengar, Thomas E. Murphy, Allie Young, David Norton, Mark Driver, L. Frank Kenney, Greta A. James, Mark A. Beyer, Roy W. Schulte, Yefim V. Natis, David Gootzit, Frances Karamouzis, Lorrie Scardino, Michael J. Blechar, David Newman, Joseph Feiman, Neil MacDonald, Donald Feinberg, Ray Valdes, Matt Light, David W. Cearley, David W. McCoy, Jess Thompson

A shift to process and service orientation is altering staffing, tools and methods of software development. In parallel, governance, planning, control and quality assurance techniques are being refined and strengthened to drive more predictability and meet the challenges of global sourcing.

© 2007 Gartner, Inc. and/or its Affiliates. All Rights Reserved. Reproduction and distribution of this publication in any form without prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. Although Gartner's research may discuss legal issues related to the information technology business, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner shall have no liability for errors, omissions or inadequacies in the information contained herein or for interpretations thereof. The opinions expressed herein are subject to change without notice.
TABLE OF CONTENTS

Analysis
  What You Need to Know
  The Hype Cycle
  The Priority Matrix
  On the Rise
    Data Service Architectures
    Metadata Ontology Management
    Information-Centric Infrastructure
    SDLC Security Methodologies
    SOA Testing
    Collaborative Tools for the Software Development Life Cycle
    Enterprise Information Management
    Application Quality Dashboards
    Event-Driven Architecture
    Metadata Repositories
    RIA Platforms
  At the Peak
    Application Testing Services
    SOA Governance Technologies
    Globally Sourced Testing
    Model-Driven Architectures
    Scriptless Testing
    Architected, Model-Driven SODA
    Enterprise Architecture Tools
    Application Security Testing
  Sliding Into the Trough
    Project and Portfolio Management
    Business Application Package Testing
    Agile Development Methodology
    Unit Testing
    ARAD SODA
    SOA
  Climbing the Slope
    Enterprise Software Change and Configuration Management
    Enterprise Portals
    Microsoft .NET Application Platform
    OOA&D Methodologies
    Linux as a Mission-Critical DBMS Platform
    Performance Testing
    Open-Source Development Tools
    Business Process Analysis
  Entering the Plateau
    Automated Testing
    Java Platform, Enterprise Edition
  Appendices
    Hype Cycle Phases, Benefit Ratings and Maturity Levels
Recommended Reading

LIST OF TABLES

Table 1. Hype Cycle Phases
Table 2. Benefit Ratings
Table 3. Maturity Levels

LIST OF FIGURES

Figure 1. Hype Cycle for Application Development, 2007
Figure 2. Matrix for Application Development, 2007
Figure 3. Hype Cycle for Application Development, 2006
ANALYSIS

What You Need to Know

Technology and governance advances are improving the speed and quality of software delivery and the business utility of the products. Service orientation is becoming the most common architectural approach. Techniques and tools to improve the planning, measurement, control and reporting of application development and delivery activities are advancing quickly.

The Hype Cycle

Application development activities are changing in two ways: 1) process and service orientation is altering the staffing, tooling and methods used to carry software from business need to production code, and 2) governance, planning, control and quality assurance (QA) techniques are being refined and strengthened to drive more predictability and to meet the challenges of global sourcing.

Figure 1. Hype Cycle for Application Development, 2007

[Hype Cycle chart plotting visibility against time. Each profiled technology is positioned along the curve from the Technology Trigger through the Peak of Inflated Expectations and the Trough of Disillusionment to the Slope of Enlightenment and the Plateau of Productivity, and coded by years to mainstream adoption: less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau. As of June 2007.]

Source: Gartner (June 2007)

The Priority Matrix

Financial effectiveness has become a major issue for many application groups. They are seeking more-formal processes to help achieve the goal of running IT as a business, with budget discipline and effective planning techniques that lead to predictable results. Although individually incremental, the convergence of changes in governance, planning and control techniques is transformative when taken across all topic areas.

Service-oriented architecture (SOA) leads to service-oriented development and requires substantial changes in staffing, tooling and practice throughout development organizations. Business process management (BPM) techniques move companies in the same direction. Ultimately, the distinctions between service-oriented development of applications (SODA) and BPM will narrow, as both marginalize the distinctions between development-time and runtime systems and processes.

Figure 2. Matrix for Application Development, 2007

[Priority matrix grouping the profiled technologies by benefit rating (transformational, high, moderate, low) against years to mainstream adoption (less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years). As of June 2007.]

Source: Gartner (June 2007)
On the Rise

Data Service Architectures
Analysis By: Mark Beyer

Definition: Data services consist of processing routines that provide direct data manipulation pertaining to the delivery, transformation, and logical and semantic reconciliation of data. Unlike point-to-point data integration solutions, data services decouple data storage, security and mode of delivery from one another, as well as from individual applications, to deliver them as independently designed and deployed functionality that can be connected via a registry or composite processing framework. Data services can be used in a networked fashion that is orchestrated through a composite processing model, or designed separately and then reused in various, larger-grained processes.

Position and Adoption Speed Justification: Data services are, by their nature, a new style of data access strategy that replaces the data management, access and storage duties currently deployed in an application-specific manner. Data services architecture is merely a subclass or category of SOA: it does not form a new architecture, but brings emphasis to the varying services that exist within SOA. Most of the large vendors have announced road maps and plans to pursue some variant of the data service approach, but this is an evolutionary architectural style that does not warrant "rip and replace" at this time and will coexist with current application design techniques. Disillusionment will occur as organizations realize the granularity required to deploy this type of architecture, especially relative to the differences between handling data via a business operational process and handling data via industry delivery concepts.

User Advice: Users should focus on delivering a semantic layer that portrays the use of data and information in the organization and, at the same time, begin developing a logical business model. The logical and semantic model should be interpreted to the physical repositories throughout the organization, creating a physical-to-logical-model reconciliation. In 2006, this technology class was focused specifically on information in the former "structured" data class only. In 2007, initial advances in using model-to-model (M2M) language communication via metadata operators are blended into this technology. The M2M introduction caused a temporary retrograde in the technology's position and, at the same time, will accelerate its movement along the cycle. Existing data integration vendors (extraction, transformation and loading [ETL], enterprise information integration [EII] and enterprise application integration) have begun to pursue common metadata repositories used as a core library to deploy all data delivery modes, but have not built machine intelligence into optimization strategies. Organizations should eschew vendor development platforms that deny or refute the requirement for interoperability.

Business Impact: Data services are not an excuse for each organization to write its own, unique database management system (DBMS), as most DBMSs both store data and provide ready access. Data services can sever the tight links between application interface development and the more infrastructure-style decisions of database platforms, operating systems (OSs) and hardware. Specifically, the metadata interpretation between business process models, semantic usage models and logical/physical data models will enhance the overall adaptiveness of IT solutions. This will create portability of applications to lower-cost repository environments when appropriate, and will create a direct correlation between the cost of information management and the value of the information delivered, by delivering semantically consistent data and information to any available presentation format. This is opposed to the current scenario, in which monolithic application design can drive infrastructure costs up because of its dependence on specific platform or DBMS capabilities.

Benefit Rating: Transformational

Market Penetration: Five percent to 20% of target audience

Maturity: Emerging

Sample Vendors: Ab Initio; Business Objects; IBM; Informatica; Oracle

Recommended Reading: "The Emerging Vision for Data Services: Becoming Information-Centric in an SOA World"

"Data Integration Is Key to Successful Service-Oriented Architecture Implementations"

"Service-Oriented Business Applications Require EIM Strategy"
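The decoupling this profile describes can be made concrete with a small sketch. The following Java fragment is illustrative only; the CustomerDataService contract, the Customer entity and the in-memory implementation are hypothetical names, not drawn from any vendor listed here. Consuming applications bind to the logical business model, while storage platform, security and delivery mode stay behind the interface, so an implementation can be replaced (or relocated to a lower-cost repository environment) without touching application logic.

    import java.util.List;
    import java.util.Optional;

    public class DataServiceSketch {

        // A logical business entity from the semantic layer, deliberately free
        // of storage details (no table names, keys or connection settings).
        public record Customer(String id, String name, String segment) {}

        // The data service contract that consumers program against.
        public interface CustomerDataService {
            Optional<Customer> findById(String id);
            List<Customer> findBySegment(String segment);
        }

        // One interchangeable implementation; a DBMS- or file-backed variant
        // could be registered in a composite processing framework instead.
        static class InMemoryCustomerDataService implements CustomerDataService {
            private final List<Customer> store = List.of(
                    new Customer("C1", "Acme", "enterprise"),
                    new Customer("C2", "Initech", "midmarket"));

            public Optional<Customer> findById(String id) {
                return store.stream().filter(c -> c.id().equals(id)).findFirst();
            }

            public List<Customer> findBySegment(String segment) {
                return store.stream().filter(c -> c.segment().equals(segment)).toList();
            }
        }

        public static void main(String[] args) {
            CustomerDataService service = new InMemoryCustomerDataService();
            service.findById("C1").ifPresent(c -> System.out.println(c.name()));
        }
    }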
Metadata Ontology Management
Analysis By: Mark Beyer; Michael Blechar

Definition: Metadata ontology management addresses the problem of information assets that are created by different processes, defined by different business terms and interpreted through disparate semantics, producing competing taxonomies. Ontology management recognizes that simultaneous metadata descriptions can exist for each information asset, and proceeds to reconcile them. The various metadata sources include business process modeling, EII, ETL, metadata repository technologies and others. Ontology management allows business analysts to better leverage the value of these assets, while promoting improved understanding across business units and IT management personnel.

Position and Adoption Speed Justification: Business organizations are just embarking on the use of metadata to determine the value of data points and information delivered through IT systems. One high-business-value use of metadata is the ability to justify and identify how decisions were made, based on the information available at any given time. The new demand for metadata that describes end users' interpretations of "fact" will force the introduction of annotation metadata in daily workflows. Presently, most metadata management functionality is a feature of existing metadata tools, limited to model extension with end-user-defined columns and to metadata versioning with no workflow or administrative enforcement beyond the development team. With the advent of SOAs and the active use of metadata to control services flow, it will become imperative that the business becomes involved in linking BPM workflows with information management workflows. This will force the development of new metadata management tools with a radically different business user interface.

User Advice:

1. Identify data management and integration tools that include metadata repository management interfaces supporting metadata model extensions.

2. Identify data management and integration tools that expose metadata repositories via application programming interfaces (APIs) and service calls, rather than via metadata import/export functionality only.

3. Acclimate business personnel to their role in creating information assets and to the importance of metadata as a precursor to introducing these practices.

4. Initiate a data administration task to capture the various business ontologies of integrated information resources, with the understanding that ontology evolves continuously.

Business Impact: Tighter integration between business process change and IT systems change. Business units and users will be able to better relay their concerns regarding the use of information assets throughout the organization. Business analysts will be better able to assess the risks and benefits that accrue to the business from the maintenance and security of information assets.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Embryonic
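To ground the reconciliation idea, here is a deliberately simplified sketch. All names in it (register, sameAsset, the "party:customer" identifier) are hypothetical, and real ontology tools work over far richer models; the point is only that each community's business term maps to a canonical asset identifier, so two taxonomies' competing names can be recognized as descriptions of the same information asset.

    import java.util.HashMap;
    import java.util.Map;

    public class OntologySketch {

        // term as used by one business community -> canonical asset identifier
        private final Map<String, String> canonical = new HashMap<>();

        // Record that a community's term denotes a canonical asset.
        public void register(String communityTerm, String canonicalAsset) {
            canonical.put(communityTerm.toLowerCase(), canonicalAsset);
        }

        // Two terms are reconciled if they resolve to the same canonical asset.
        public boolean sameAsset(String termA, String termB) {
            String a = canonical.get(termA.toLowerCase());
            return a != null && a.equals(canonical.get(termB.toLowerCase()));
        }

        public static void main(String[] args) {
            OntologySketch ontology = new OntologySketch();
            ontology.register("client", "party:customer");         // sales taxonomy
            ontology.register("account holder", "party:customer"); // finance taxonomy
            System.out.println(ontology.sameAsset("client", "account holder")); // true
        }
    }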
Information-Centric Infrastructure
Analysis By: David Newman

Definition: Information-centric infrastructure (ICI) is a technology framework that enables information producers and information consumers to organize, share and exchange any content (structured and unstructured data, for example), anytime, anywhere. It is the technology building block within an organization's enterprise information management (EIM) program. Because different systems use different formats and different standards to share and exchange different types of information, the technologies that make up an ICI ensure that common processes applied to common content will produce similar results.

Position and Adoption Speed Justification: The vision for an ICI will be adopted by organizations seeking to bring greater balance to their integration activities, to address the cost and complexity issues associated with silo-based, application-centric development. One reason organizations cannot respond as quickly as market conditions dictate is that much of their information has been isolated within applications, each fulfilling its own unique (process-driven) requirements. As demands for access to information sources increase, organizations will use an ICI as their technical foundation to facilitate the convergence of the different types of content required by industry "ecosystems" and trade exchanges. This will help resolve issues around info-glut, and will improve application integration capabilities during migration toward SOAs.

User Advice: Recognize that different project teams use different applications, formats and standards to exchange information. Look for common ways to normalize and extract meaning from all types of content so that it can be exchanged across the organization. Use existing system analyses and designs as starting points to develop common models, which can then be shared by different processing components and system entities. Use existing methods of content-centric processing to identify gaps that need to be filled to support ICI requirements. For instance, determine the usefulness of the Federal Enterprise Architecture Framework Data Reference Model (version 2.0) to your industry, regardless of whether you are a commercial or government organization. Exploit emerging standards (such as XML) for data and metadata interchange, and create a common components library of metadata objects based on corporate standards, thereby promoting wide-scale reuse.

Business Impact: An ICI brings balance to many application-driven environments because it "normalizes" the chaos caused by having different and diverse standards, formats and protocols. It extracts meaning and delivers context so that each content instance can be shared and exchanged to support a variety of business process needs: by identifying, abstracting and rationalizing commonalities across content; applying semantics for information exchange and interoperability; and implementing metadata management for discovery, reuse and repurposing. Organizations failing to invest in building out an ICI by 2015 will experience a 30% increase in the overhead costs of managing their IT operations. An ICI will make far greater use of emerging technologies than most companies are used to. It is the inevitable outcome of decoupling application logic from data management requirements (as seen in SOA).

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Recommended Reading: "Key Issues for Information-Centric Infrastructures, 2007"

"Gartner Defines the Information-Centric Infrastructure"

"Information-Centric Infrastructure: Application Integration Via Content"

"Predicts 2007: Information Infrastructure Emerges"

SDLC Security Methodologies
Analysis By: Joseph Feiman

Definition: Software development life cycle (SDLC) security methodologies are based on the principle that security implementation shouldn't be an isolated process, but rather part of a comprehensive software engineering process. The methodologies should make security engineering a measurable, repeatable, predictable and controlled discipline. They will also enable the detection, correction and prevention of application vulnerabilities.

Position and Adoption Speed Justification: We expect that the adoption of SDLC security methodologies will follow the standard pattern of methodologies such as the Capability Maturity Model. Only a few organizations (primarily external service providers — ESPs) will reach the highest level of maturity, while most will remain at the lower levels. Achieving the lower levels of maturity will take approximately 18 months or more; moving up to each of the higher levels will take about the same amount of time again.

User Advice: To enable the detection, correction and prevention of security vulnerabilities in applications, ESPs should consider formally adopting SDLC security methodologies to confirm their adherence to practices that embed security into systems and software engineering. Formal adoption may also prove to potential clients that ESPs are competent in secure software engineering. Enterprises' internal IT departments should informally (that is, without certification) consider selecting and adopting the methodologies' best practices that meet their needs and match their means.

Business Impact: The objective of SDLC security methodologies is to reduce security risks by making systems' security engineering a measurable, repeatable, predictable and controlled discipline. The discipline should ensure that security threats and vulnerabilities have been reviewed, that their impact is recognized, that risks have been assessed, and that appropriate preventive organizational and management measures have been applied to the software engineering process.

Benefit Rating: Moderate

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: International Organization for Standardization; International Systems Security Engineering Association; Software Engineering Institute of Carnegie Mellon University

Recommended Reading: "Security as Engineering Discipline: The SSE-CMM's Objectives, Principles and Rate of Adoption"

SOA Testing
Analysis By: Thomas Murphy

Definition: SOA testing tools are designed to assess service-oriented applications. The tools verify XML, perform load and stress testing of services, and promote the early, continuous testing of services as they are developed. These products have to deal with changing standards, and should support the interfaces, formats, protocols and the variety of implementations available. Although similar to traditional functional and load testing tools, these products do not rely on a user interface for definition, and they should deal with issues such as long-running and parallel processes. As these tools mature, links should emerge to leverage the data they produce with service governance tools, such as security and registry management tools.

Position and Adoption Speed Justification: SOA testing tools are new in the market and tend to come from relatively new companies, with improving support from the historic testing leaders. Web services definitions and standards are evolving, prompting tool manufacturers to catch up.

User Advice: If you have invested in building out Web services, then you should have a solid unit testing approach. Investigate these tools primarily to ensure load capacity for your services, to discover failure behaviors and to speed the development of new services. Testing for services should make use of an existing foundation of tests written for the underlying implementation code. Tests should be factored to enable testing of the specifically affected systems when changes are made, rather than testing the entire system. This includes the ability to unit test individual elements, as well as specific orchestrations across services.

Business Impact: Web services must be stable and reliable for applications to be built on top of them. They need a solid testing focus, or the services will become liabilities to application stability. Because services offer a way to transform the business, these testing tools will be critical to the strategic success of businesses.

Benefit Rating: Transformational

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: HP; iTKO; IBM; Mindreef; Parasoft; Solstice Software; SOASTA
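A minimal sketch may help show what interface-level service testing looks like in practice: no user interface is involved, and the service is exercised directly over its protocol. The endpoint URL and the expected payload fragment below are assumptions for illustration only; real SOA testing tools add schema validation, orchestration-aware test factoring and far more controlled load generation.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.*;

    public class SoaTestSketch {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();
        // Hypothetical service endpoint, assumed for this sketch.
        private static final URI SERVICE = URI.create("http://localhost:8080/orderService");

        // Functional check against the service interface, not a UI.
        static void assertServiceResponds() throws Exception {
            HttpResponse<String> r = CLIENT.send(
                    HttpRequest.newBuilder(SERVICE).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (r.statusCode() != 200 || !r.body().contains("<orderStatus>")) {
                throw new AssertionError("functional check failed: " + r.statusCode());
            }
        }

        // Fire n concurrent calls to observe failure behavior under load.
        static void loadCheck(int n) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(n);
            CompletionService<Integer> done = new ExecutorCompletionService<>(pool);
            for (int i = 0; i < n; i++) {
                done.submit(() -> CLIENT.send(
                        HttpRequest.newBuilder(SERVICE).GET().build(),
                        HttpResponse.BodyHandlers.ofString()).statusCode());
            }
            for (int i = 0; i < n; i++) {
                System.out.println("status: " + done.take().get());
            }
            pool.shutdown();
        }

        public static void main(String[] args) throws Exception {
            assertServiceResponds();
            loadCheck(25);
        }
    }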
Collaborative Tools for the Software Development Life Cycle
Analysis By: James Duggan

Definition: SDLC collaborative tools enable communication and collaboration across cultural, geographical and professional boundaries throughout the application life cycle. The features, which were developed in stand-alone products (such as wikis and electronic-meeting systems), are now appearing in multiple development tool markets. The addition of collaboration features can enhance the effectiveness and efficiency of all phases of application development, including analysis, design, construction, testing and deployment, integration, maintenance and enhancement. These features enable customer-to-developer understanding, as well as knowledge capture and transfer. Collaboration features complement and enhance the structured coordination tools that make up most application life cycle management suites — for example, workflow, change management and project management solutions.

Position and Adoption Speed Justification: Application delivery globalization — in which applications are built and maintained by teams working all over the world — is growing. However, this growth increases the risk of miscommunication and distortion. As the globalization of application delivery accelerates, it raises the priority and expedience of technology vendors' efforts to address the growing demand. Broad adoption will require collaboration features to support multiple sites, enable federated control and remote monitoring, and incorporate intellectual property and asset protection.

User Advice: Coordinate tool evaluation geographically to ensure full consideration of cultural and skill differences across groups. Pilot changes in the process to ensure that distance effects are understood.

Business Impact: Gartner expects significant mitigation of the risks posed by the globalization of application delivery. Because of collaborative and globally distributed efforts, cost savings will occur, and revenue will be produced by the new applications.

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Sample Vendors: BMC Software; CollabNet; Digite; iRise; Sofea; VA Software

Enterprise Information Management
Analysis By: David Newman

Definition: EIM is an integrative discipline for structuring, describing and governing information assets, regardless of organizational and technological boundaries, to improve operational efficiency, promote transparency and enable business insight. EIM is operationalized as a program with a defined charter, budget and resource plan.

Position and Adoption Speed Justification: Many organizations have silos of information: inconsistent, inaccurate and conflicting sources with no "single version of the truth." Project-level information management techniques have caused issues in data quality and accessibility. This has led to higher costs, integration complexity, risk and unnecessary duplication of data and process. Results from a Gartner study confirm that EIM is in the early-adopter stage. The findings suggest that the EIM trend will need frameworks, case studies and maturity models to help guide organizations through the benefit realization curve. Certain business drivers, such as compliance, will accelerate adoption as organizations look to fulfill transparency and efficiency objectives from upstream systems to downstream applications. Other triggers for adoption include the information management implications of new development models, such as SOA, which places greater emphasis on a disciplined approach to information management. Enterprises will use EIM to support the increased demands for governance and accountability of information assets through formalized data quality and stewardship activities. Adoption of EIM will also increase as pressure intensifies to consolidate related technologies for managing both structured and unstructured information assets. Organizations will look for a common framework or infrastructure in which to converge overlapping technologies and projects in master data management, business intelligence, metadata management, data integration, information access and content management. Organizations will adopt EIM in stages, looking first at foundational activities such as metadata management, master data management and data mart consolidation; data quality activities; and data stewardship role definition.

User Advice: End-user clients should resist vendor claims that their products "do" EIM. EIM is not a technology market. Clients should connect certain technologies and projects (such as master data management, metadata management, information life cycle management, content management and data integration) as part of an EIM program. Secure senior-level commitment for EIM as a way to overcome information barriers, exploit information as a strategic resource, and fuel the drive toward enterprise agility. Use the pressures for improving IT flexibility, adaptability, productivity and transparency as part of the EIM business-case justification. Grow the EIM program incrementally. Pursue foundational EIM activities such as master data management and metadata management. Address operational activities, such as defining the EIM strategy, creating a charter and aligning resources to the program. Operationalize EIM with a defined budget and resource plan. Establish an ICI to share and exchange all types of content. Implement governance processes, such as stewardship and data quality initiatives. Set performance metrics (such as reducing the number of point-to-point interfaces or conflicting data sources) to demonstrate value.

Business Impact: EIM is foundational to complex business processes and strategic initiatives. By organizing related information management technologies into a common ICI, an EIM program can reduce transaction costs across companies and improve the consistency, quality and governance of information assets. EIM supports transparency objectives in compliance and legal discovery. It breaks down information silos by facilitating the decoupling of data from applications — a key aspect of successful SOAs. It establishes a single version of the truth for master data assets. EIM institutes information governance processes to ensure that all information assets adhere to quality, security and accessibility standards. Key components of EIM (for example, master data management, global data synchronization, semantic reconciliation, metadata management, data integration and content management) have been observed across multiple industries (such as banking, investment services, consumer goods, retail and life sciences).

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Emerging

Recommended Reading: "Business Drivers and Issues in Enterprise Information Management"

"Mastering Master Data Management"

"From IM to EIM: An Adoption Model"

"Data Integration Is Key to Successful Service-Oriented Architecture Implementations"

"Gartner Study on EIM Highlights Early Adopter Trends and Issues"

"Gartner Definition Clarifies the Role of Enterprise Information Management"

"Key Issues for Enterprise Information Management, 2007"

Application Quality Dashboards
Analysis By: Thomas Murphy

Definition: These tools give an overall view of code quality and integrate the various forms of testing to enable a more cohesive test strategy. They are being driven from multiple entry points, including integrated application life cycle management suites, quality management dashboards and portfolio management tools. Initial support includes only functional and performance testing integration; it misses integration with other testing tasks, such as static analysis for security, code quality and standards compliance. Project-management-office-centric tools will also include information from operational and application management tools to better understand the risks, benefits and costs of applications in production, enabling improved investment choices.

Position and Adoption Speed Justification: Acquisition and competition are pushing these tools along rapidly. Eclipse is also shaping the market with the inclusion of several underlying frameworks and metamodels that provide a foundation for integration and reporting. However, expectations will run ahead of implementation quality. Continued integration with process guidance and the rest of the life cycle will drive maturity. Vendors still need to expand product coverage to adequately support all phases of the testing life cycle. Operational tools will take an approach oriented toward determining whether an application has passed the appropriate gates to be deployed, and portfolio tools will blend this information with operational and help-desk data to help determine projects. Overall, improved reporting will help organizations that use reports to locate areas of concern and measure improvements.

User Advice: Expect to require products from multiple software vendors for three to five years, as well as a lack of overall integration of quality metrics reporting. Seek tools with extensible repositories that simplify the integration of additional data sources.

Business Impact: Metrics and a better understanding of software quality can lead to better planning and deployment of resources.

Benefit Rating: High

Market Penetration: One percent to 5% of target audience

Maturity: Adolescent

Sample Vendors: 6th Sense Analytics; Atlassian Software Systems; Borland; Compuware; Enerjy; IBM; JetBrains; Mercury; Microsoft; Polarion
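The "extensible repository" advice is the key architectural point, and a toy sketch makes it concrete. Every name here (MetricSource, QualityDashboardSketch, the two sample sources) is hypothetical: the dashboard aggregates over a uniform source interface, so adding security static analysis or help-desk data later means registering another source, not reworking the dashboard.

    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class QualityDashboardSketch {

        // Uniform contract for any metrics producer (tests, static analysis,
        // help-desk feeds); new sources plug in without dashboard changes.
        public interface MetricSource {
            String name();
            Map<String, Double> metrics();
        }

        private final List<MetricSource> sources = new ArrayList<>();

        public void register(MetricSource source) { sources.add(source); }

        // Flatten all sources into one report, qualifying metric names by source.
        public Map<String, Double> report() {
            Map<String, Double> combined = new LinkedHashMap<>();
            for (MetricSource s : sources) {
                s.metrics().forEach((k, v) -> combined.put(s.name() + "." + k, v));
            }
            return combined;
        }

        public static void main(String[] args) {
            QualityDashboardSketch dashboard = new QualityDashboardSketch();
            dashboard.register(new MetricSource() {
                public String name() { return "unitTests"; }
                public Map<String, Double> metrics() { return Map.of("passRate", 0.97); }
            });
            dashboard.register(new MetricSource() {
                public String name() { return "staticAnalysis"; }
                public Map<String, Double> metrics() { return Map.of("criticalFindings", 3.0); }
            });
            System.out.println(dashboard.report());
        }
    }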
Event-Driven Architecture
Analysis By: Roy Schulte; Yefim Natis

Definition: Event-driven architecture (EDA) is a subset of the more general topic of event processing. EDA is an architectural style in which some elements of the application execute in response to the arrival of event objects. An element decides whether to act, and how to act, based on the incoming event objects. In EDA, event objects are delivered in messages that do not specify any method name (such messages are called event notifications). The event source does not tell the event receiver what operation to perform. An event is something that happens (or does not happen, but was expected or thought possible). Examples include a stock trade, a customer order, an address change, and a shipment arriving or failing to arrive (under specified conditions). An event may be documented in software by creating an event object (sometimes itself called simply an "event," which is then a second meaning for the term). An event object represents or records a happening (an "ordinary" event). Examples of event objects include a message from a financial data feed (a stock tick), an XML document containing an order, or a database row. In casual discussion, programmers often call the message that conveys an event object an "event."

Position and Adoption Speed Justification: Computer systems have used event processing in many different ways for decades. Event processing is moving through the Hype Cycle now because its concepts are being applied more broadly and at a higher level. Business events, such as purchase orders, address changes, payments, credit card transactions or Web "clicks," are being used as a focus in application design. This contrasts with past treatments of events, in which business applications addressed events more indirectly, and event modeling was considered secondary to data modeling, object modeling and process modeling. Businesses have always been real-time, event-driven systems, but now more aspects of their application systems are also real-time systems. EDA concepts are also used on a technical level to make application servers and other software more efficient and scalable. The spread of other types of SOA (conventional, request/reply SOA) is also helping to pave the way for EDA, because some of the concepts, middleware tools and organizational strategies are the same.

User Advice: In an era of accelerating business processes, pervasive computing and exploding data volumes, companies must master event processing if they are to thrive. Companies should use event processing in two ways: to engineer more-flexible application software through the use of message-driven processing, and to gain better insight into current business conditions through complex-event processing (CEP). Architects can use available methodologies and tools to build good EDA applications, but they must consciously impose an explicit focus on events, because standard methodologies and tools do not yet make events first-class citizens in the development process. Companies should implement EDA as part of their SOA strategy, because many of the same middleware tools and organizational techniques (such as using an SOA center of excellence [COE] for EDA and for other kinds of SOA) apply. Companies should not implement request/reply SOA now and wait one or two years to implement EDA, because a request/reply-only SOA strategy will not be able to support some business requirements well.

Business Impact: EDA is relevant in every industry. Large companies experience literally trillions of ordinary business events every day, although only a minority of these are represented as event objects, and only a tiny minority of those event objects are fully exploited for their maximum information value. The number and size of event streams are growing as the cost of computing and networking continues to drop. Companies now generate data on events that were never reported in the past. The CEP type of business EDA was first used in financial trading, energy trading, supply chain management, fraud detection, homeland security, telecommunications, customer contact centers, logistics and sensor networks, such as those based on radio frequency identification (RFID). Event processing is a key enabler of business activity monitoring (BAM), which makes business operations more visible to end users.

Benefit Rating: Transformational

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: Actimize; Agent Logic; Agentis Software; Aleri; Avaya; Axeda; BEA Systems; coral8; Cordys; Event Zero; Exegy; firstRain; IBM; jNetX; Kabira; Kx Systems; open cloud; Oracle; Progress Software/Apama; Red Hat (Mobicents); Rhysome; SAP; SeeWhy; StreamBase Systems; Sun; Sybase; Syndera; Synthean; Systar SA; Tibco Software; Truviso; Vayusphere; Vhayu; Vitria Technology; WareLite
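The notification style described in this profile is easy to see in miniature. In the hedged sketch below, the OrderPlaced event, the Channel class and the subscriber rules are all invented for illustration (a production system would use a messaging backbone such as JMS): the event source publishes a fact and names no receiver and no method, and each subscribed element decides for itself whether and how to act, including a simple CEP-style condition.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    public class EdaSketch {

        // An event object records a business happening; it carries facts only,
        // no method name and no addressee.
        public record OrderPlaced(String orderId, double amount) {}

        // A trivial in-process channel standing in for a messaging backbone.
        static class Channel {
            private final List<Consumer<OrderPlaced>> subscribers = new CopyOnWriteArrayList<>();
            void subscribe(Consumer<OrderPlaced> s) { subscribers.add(s); }
            void publish(OrderPlaced e) { subscribers.forEach(s -> s.accept(e)); }
        }

        public static void main(String[] args) {
            Channel channel = new Channel();

            // Each receiver applies its own rule; the source knows none of them.
            channel.subscribe(e -> System.out.println("billing: invoice " + e.orderId()));
            channel.subscribe(e -> {
                if (e.amount() > 10_000) {  // a CEP-style condition on the event
                    System.out.println("monitoring: large order " + e.orderId());
                }
            });

            channel.publish(new OrderPlaced("PO-7", 12_500.00));
        }
    }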
Metadata Repositories
Analysis By: Michael Blechar; Jess Thompson

Definition: Metadata is an abstracted level of information about the characteristics of an artifact, such as its name, location, perceived importance, quality or value to the organization, and relationship to other artifacts. Technologies called "metadata repositories" are used to document, manage and perform analysis (such as change impact analysis and gap analysis) on metadata, in the form of artifacts representing the assets that the enterprise wants to manage. Repositories cover a wide spectrum of metadata/artifacts, such as those related to business processes, components, data/information, frameworks, hardware, organizational structure, services and software, in support of focus areas like application development, data architecture, data warehousing and enterprise architecture (EA).

Position and Adoption Speed Justification: Most organizations that have tried to implement a single enterprise metadata repository have failed to meet the expected return on investment. Community-based repositories supporting business process modeling and analysis, SOA and data integration have shown benefits in improved quality and productivity, through an improved understanding of the artifacts, impact queries and the reuse of assets such as services and components. For the near future, there will be no proven, viable solution that federates multiple metadata repositories (or federates repositories with other technologies that contain metadata, like service registries holding runtime metadata artifacts) sufficiently to satisfy the needs of organizations. Mainstream IT organizations will find that the most pragmatic approach to metadata management and reporting is to have multiple, community-based repositories with some degree of federation and synchronization. Although it is possible to create federated queries across multiple repositories, many organizations may still want to consolidate and aggregate selected metadata from disparate sources into a "metadata warehouse" for ease of reporting and for ad hoc query purposes. Leading metadata repository vendors are well-positioned to meet this need, but competitors will emerge, including large independent software vendors (ISVs), which will look to provide these capabilities in their tool suites. Large vendors, such as IBM, Oracle and SAP, are adding repositories — or are improving their repository support for design-time and runtime platforms — to enhance metadata management support for their development and deployment environments. As a result, Gartner expects to see a broader degree of acceptance by customers, along with a consolidation in this market during the next few years. We position metadata repositories as being two to five years from the plateau, because most Global 1000 companies have purchased metadata repositories and are not yet aggressively seeking replacements, and because most new buyers are less-sophisticated IT organizations looking to large ISVs to improve their federation capabilities before committing to new tools. As a result, most repository purchases will be tactical in nature, based on the needs of specific communities, such as data warehousing and SOAs.

User Advice: Owing to the diversification and consolidation of metadata management solutions, the enterprise uber-repository market no longer exists. Consider acquiring or extending the use of a metadata repository as part of moving to SOAs, or when implementing BPM, data architecture, data warehousing and EA initiatives. Most organizations will be best served by living with metadata in multiple tools, or by using different repositories based on communities of interest, with some limited bridging or synchronization to promote the reuse and leveraging of knowledge and effort. Organizations that need to approximate the capabilities of an enterprise metadata repository are still best served by solutions from the leading repository vendors.

Business Impact: Metadata repository technology can be applied to aspects of business, enterprise, information and technical architectures, including the portfolio management and cataloging of software services and components; business models; data-warehousing ETL rules; business intelligence transformations and queries; data architecture; electronic data interchange; and outsourcing engagements.

Benefit Rating: High

Market Penetration: Five percent to 20% of target audience

Maturity: Adolescent

Sample Vendors: Allen Systems Group; BEA Systems; LogicLibrary

Recommended Reading: "Are Federated Metadata Approaches to Business Service Repositories Valid?"

"Best Practices for Metadata Management"

"Metadata Management Technology Integration Cautions and Considerations"

"Metadata Repositories Address Disparate Sets of Needs"

"The Evolving Metadata Repository Market"
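As a concrete illustration of the change impact analysis mentioned in this profile (a sketch only, with invented artifact names, not the model of any repository product): artifacts and their dependency relationships form a graph, and an impact query walks that graph transitively from a changed artifact. A federated query across several community repositories is essentially the same walk performed over more than one such graph, which is why the synchronization discussed above matters.

    import java.util.*;

    public class RepositorySketch {
        // artifact -> artifacts that directly depend on it
        private final Map<String, Set<String>> dependents = new HashMap<>();

        public void relate(String artifact, String dependentArtifact) {
            dependents.computeIfAbsent(artifact, k -> new HashSet<>()).add(dependentArtifact);
        }

        // Everything transitively affected if `changed` is modified.
        public Set<String> impactOf(String changed) {
            Set<String> impacted = new LinkedHashSet<>();
            Deque<String> work = new ArrayDeque<>(List.of(changed));
            while (!work.isEmpty()) {
                for (String d : dependents.getOrDefault(work.pop(), Set.of())) {
                    if (impacted.add(d)) work.push(d);
                }
            }
            return impacted;
        }

        public static void main(String[] args) {
            RepositorySketch repo = new RepositorySketch();
            repo.relate("table:CUSTOMER", "etl:LoadCustomer");
            repo.relate("etl:LoadCustomer", "report:ChurnDashboard");
            repo.relate("table:CUSTOMER", "service:CustomerLookup");
            // Prints all artifacts affected by a change to the CUSTOMER table
            // (order may vary).
            System.out.println(repo.impactOf("table:CUSTOMER"));
        }
    }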
Business Impact: A user experience that is perceptibly better than other offerings in a product category can provide sustainable competitive advantage. Consider the flagship examples of the RIA/Ajax genre, such as Google's Gmail, Maps and Calendar applications, which achieved high visibility and strong adoption despite entering late into a mature and stable product category. However, competitive advantage is not a guaranteed result of RIA technology deployment; it depends on innovations in usability (independent of technology) and on server-side architectures that complement client-side user interface technology. Many organizations do not have the process maturity to deliver a consumer-grade user experience and will need to acquire talent or consulting resources to achieve positive business impact.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Emerging

At the Peak

Application Testing Services
Analysis By: Frances Karamouzis; Allie Young; Lorrie Scardino

Definition: Application testing services include all types of validation, verification and testing services for quality control and assurance within the application development life cycle, with the goal of delivering software that is developed according to defined specifications and will operate in a production environment. Testing services, which have always been an integral part of the application development life cycle, are now increasingly carved out as a separate competency area, often supported by a distinct development methodology. Testing services may be performed manually or with automation tools, and carried out by internal IT resources or by ESPs. The scope of application testing services includes various functions that go by different names, such as unit testing (which is done by the application developers), integration testing, system testing, functional testing, regression testing, performance/stress testing, usability testing and security testing. Application testing applies to custom or packaged applications, and to single applications or many. When externally sourced, application testing services may be purchased as staff augmentation, discrete project work or longer-term outsourcing engagements.

Position and Adoption Speed Justification: In the past three years, more attention and focus have been placed on testing services. Several business factors have accelerated this focus:
• Organizations increasingly recognize the business need to achieve a more predictable and consistent software development process, including all levels of testing and QA.
• The cost of software defects is better understood today than it has been historically, because organizations are getting better at baselining costs and performance/service levels as part of a larger sourcing strategy.
• Accelerated release cycles for business applications are a reality, with more applications directly touching the customer and application availability directly tied to revenue performance. The cost of software defects is more visible in many industries.
• When organizations look for additional ways to cut costs, especially after already outsourcing, testing and QA emerge as good candidate services.
• The rise in the use of external providers for application development — especially Indian offshore providers — has raised awareness of the need for improved processes and methodologies.
• More service providers are aggressively marketing application testing services.

These factors have converged to accelerate the hype associated with application testing services. On the demand side, organizations have great expectations when they decide to externally source testing services, but often have not considered the implications of doing so or the way in which they should structure the contract and relationship. IT decision makers generally do not engage the right number and level of developers in the planning process, and keep business users at the periphery. This leads to integration problems and conflicts among resources, made more significant because an external source is assessing the quality of others' work products, which often include the products of other external sources.

On the supply side, the opportunity to leverage low-cost labor by using offshore resources for off-hours (relative to the client's workday) testing is especially appealing to pure-play offshore providers. They have invested in expanding their testing services to offer them as stand-alone services to an existing client base. Niche providers have also emerged in offshore locations as testing specialists. When demand reached critical mass, traditional providers started to see the opportunity and began to make investments to compete with the offshore providers. Thus, there has been a rapid proliferation of providers that claim to have application testing expertise. Providers will accept work, especially from an existing client, in virtually any way the client wants to scope and pay for it. This opportunistic approach perpetuates an environment that lacks standards for scope of work, service levels, price, contractual terms and other attributes, a situation consistent with an immature and hyped service offering.

User Advice: If isolating testing functions makes sense as part of your sourcing strategy, then ensure that you have a well-defined scope, clear performance requirements, measurable success criteria and engagement with all the application and user groups that will be integral to the testing process. As a discrete function, the organization must have the resources, methodology and practices in place to provide output to the testing provider, and then receive input when the function is completed. Many organizations can operate in this type of environment, while many others prefer broader accountability, such as what exists at the application level.

When evaluating providers, ensure that you give proper weighting to the level of maturity, automation and process standardization that the provider has achieved in testing services offered and delivered as stand-alone services. Consider providers with dedicated business units for testing that show consistent revenue growth in that business area. If the business unit is relatively new, then require the provider to demonstrate its commitment to this market. Check references carefully, and match your specific requirements to similar engagements. View testing as part of the application development life cycle, even if it is externally sourced as a discrete function. Ensure alignment between the application development methodology and the testing methodology. Build knowledge transfer into the outsourcing action plan.
The selected provider will need to learn your methodology, and you will need to learn the provider's. Organizations that want to leverage a provider's intellectual property must pay special attention to knowledge transfer and training during the transition process.

Application testing services may be purchased in various ways, and organizations need to be clear about their objectives and the value proposition of each option. Staff augmentation is used to address resource constraints; organizations are responsible for directing the resources and ensuring the outcomes. Discrete project work is typically used in two scenarios: for a specific application development effort that requires independent testing, or as a consulting-led project to
evaluate the efficacy of changing the way application testing is performed. These consulting-led projects are often described as pilot programs and will often lead to long-term outsourcing contracts. Finally, testing services purchased through an outsourcing contract signal the organization's commitment to leverage the market's expertise and assign delivery responsibility to an external source.

Organizations considering various sourcing options are likely to encounter an aggressive sales approach that seeks to broaden the scope of application services beyond the organization's intent. In many cases, a broader scope of work might provide benefits by leveraging the provider's process maturity to build quality into the software, as opposed to simply testing the quality of the software. Although this is a worthwhile aspiration, organizations must ensure that they are prepared to invest in broader quality programs before engaging in relationships of this nature.

Business Impact: The major business impacts of application testing services include:
• Cost savings in the discrete application development life cycle and in the longer-term, ongoing cost of maintaining the application
• Decreased time to implement new applications or functionality
• Increased rigor and productivity by resources throughout the development process
• Improved performance of applications once they're in production
• Better and more-consistent quality control processes

Many organizations do not know how much they are spending on application testing and software QA, nor do they understand the true cost of inadequate testing processes. Furthermore, most do not have discretionary budgets to develop world-class testing services. The lack of testing and QA standards and consistency often leads to business disruption, which can be costly. However, most organizations do not use a process that links testing failures to business disruption on a cost basis. Application testing is thus a case where the use of an external provider can be effective, but where that effectiveness is sometimes difficult to demonstrate clearly.
Benefit Rating: Moderate
Market Penetration: One percent to 5% of target audience
Maturity: Emerging
Sample Vendors: AppLabs Technologies; Aztecsoft; Cognizant; EDS; Hexaware; IBM; Infogain; Infosys Technologies; Keane; Satyam Computer Services; Tata Consultancy Services; Thinksoft; Wipro Technologies

SOA Governance Technologies
Analysis By: Frank Kenney

Definition: The key to being successful with your SOA projects is to understand and control your SOA artifacts. SOA artifacts can include services, SOA policies (that is, service-level agreements), business processes, and profiles of consumers and providers. The key to understanding and controlling these artifacts is SOA governance. Various technologies can help you control how your artifacts are used, managed, secured and tested, as well as how visible they are. These technologies include:

SOA policy management provides the technology to create, discover, reference and sometimes enforce policies related to SOA artifacts, such as access control, performance and service levels.
SOA registries and repositories help manage metadata related to SOA artifacts (for example, services, policies, processes and profiles) and have recently evolved to include the creation and documentation of the relationships (that is, configurations and dependencies) between various metadata and artifacts.

SOA QA and validation technologies validate individual SOA artifacts and determine their relationships to each other within the context of an SOA deployment. For example, these technologies will test and validate a composite service that executes specific processes while having specific policies enforced on it.

Monitoring is present throughout the individual technical domains; it enables companies to study an SOA and its environment and to provide deeper, real-time business intelligence and analytics applications. It also helps them check that the various governance processes are actually followed. Business activity monitoring (BAM; see "MarketScope for Business Activity Monitoring Platforms, 3Q06") plays a key role in the evolution and agility of an SOA and is the foundation for future complex-event-processing scenarios as the SOA life cycle (a cycle of developing, testing, deploying, monitoring, analyzing and refining) matures.

Adapters, interfaces, application program interfaces and interoperability standards enable all the technical domains to communicate and share information, and enable the governance suite to be integrated with existing infrastructure applications, such as business applications, integration middleware or OSs, for optimal policy definition and execution.

Position and Adoption Speed Justification: SOA governance technologies, specifically the service registry and SOA policy enforcement (service management and service security), have been hyped by vendors and end users; many end users are deploying these technologies without credible SOA governance organizational processes and strategies. As a result, service registries and policy enforcement tools are often underused today (only for cataloging and XML security). With more vendors entering into OEM agreements and partnerships with best-of-breed vendors, these technologies will reach the Peak of Inflated Expectations within 12 months. However, because most SOA deployments will likely fail without proper governance, companies will eventually move to better leverage SOA governance technologies for visibility, manageability, monitoring, security and QA.

User Advice: Regardless of the overhyping of SOA governance, companies deploying SOAs need to first develop a strategy and process for SOA governance that encompass technologies and organizations. Deploying a service registry for reuse and developing some policies around the development of services is a good start, but companies should plan on using that registry for SOA life cycle management and for visibility into various SOA artifacts.

Business Impact: Any company or division deploying an SOA will be affected by SOA governance. Entities providing software as a service, integration as a service, business-to-business services or hosting applications should take advantage of SOA governance technologies to enhance their offerings, better manage their SOA artifacts and obtain competitive differentiation.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: Actional; AmberPoint; BEA; HP-Mercury; iTKO; IBM; Layer 7 Technologies; LogicLibrary; Oracle; Reactivity; Software AG/webMethods; SOA Software; Tibco Software; Vordel; WebLayers
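To make the policy-management and registry concepts above concrete, here is a small, hypothetical sketch of how a governance layer might represent a service-level policy for a registered artifact and check observed behavior against it. The types, fields and thresholds are invented for illustration and are not drawn from any particular vendor's product.

```typescript
// Hypothetical sketch: a service-level policy attached to an SOA artifact,
// and the kind of enforcement check an SOA policy-management tool automates.
interface ServicePolicy {
  serviceName: string;   // the registered SOA artifact this policy governs
  maxResponseMs: number; // service-level objective
  requireAuth: boolean;  // access-control requirement
}

interface ObservedCall {
  serviceName: string;
  responseMs: number;
  authenticated: boolean;
}

function violations(policy: ServicePolicy, call: ObservedCall): string[] {
  const problems: string[] = [];
  if (call.responseMs > policy.maxResponseMs) {
    problems.push(`response time ${call.responseMs}ms exceeds the service level`);
  }
  if (policy.requireAuth && !call.authenticated) {
    problems.push("unauthenticated access to a protected service");
  }
  return problems;
}

const policy: ServicePolicy = { serviceName: "getQuote", maxResponseMs: 500, requireAuth: true };
console.log(violations(policy, { serviceName: "getQuote", responseMs: 750, authenticated: false }));
```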
Recommended Reading: "Criteria for Evaluating a Vendor's SOA Governance Strategy"
"No 'Leader' Exists in SOA Governance … At Least Not Yet"

Globally Sourced Testing
Analysis By: Partha Iyengar; Thomas Murphy; Allie Young

Definition: Globally sourced (offshore) testing involves the delivery and support of application testing services — such as functional, stress, regression and usability testing — using a global delivery model (GDM).

Position and Adoption Speed Justification: Service vendors, primarily from among the leading broad-based offshore providers, are increasingly focusing on application functional, stress, regression and usability testing services using the GDM. These organizations offer a wide variety of services, with a high degree of competence, emanating from the historical focus on process and quality that they have made a differentiating factor of the offshore model. The added benefit of cost-arbitrage-driven lower pricing is also a compelling factor that has made this one of the fastest-growing service lines in global sourcing. The strong growth of this class of offerings has driven many of the large traditional service providers, as well as some pure-play testing service providers, to increasingly focus on and expand this service line. Some organizations are also building COE-style testing factories to move toward increased levels of automation support for testing, and to help bring about the paradigm of "building quality into the software," as opposed to "testing quality into the software."

The ability to effectively outsource testing services using an offshore labor model is challenged by poorly written or poorly understood specifications, as well as by the problems that result from inexperience with the GDM. Challenges in communicating effectively during the development process are common for first-time users; the allure of offshore labor cost savings is driving interest in these services, but not all engagements have been highly successful. Many engagements will fail to meet expectations until collaborative environments improve and expectations become realistic. A higher degree of focus and emphasis also needs to be placed on "equalizing" the widely differing process capabilities and maturity levels of the typical client enterprise and its service providers. However, the path is well-trod by ISVs that provide models for successful use. Longtime users of offshore vendors for development have found their offshore testing efforts to be extremely successful, because they have already worked through the process and communication issues. First-time users should start small — typically with a pilot — and then work toward larger-scale efforts. A growing number of firms also offer mixed models with on-site, "nearshore" and offshore options to create more-effective communication paths.

User Advice: Explore outsourced testing for applications in maintenance, and to assist with performance testing of upgrades to software packages. However, organizations must first have models and/or documents that enable test planning, or use outside expertise to create them. The obvious opportunity is to leverage the service providers' testing offerings by using their expertise to improve basic process levels in the enterprise's internal testing environment, models and artifacts. These models should be kept up-to-date during the project, with an effective communication and versioning process.
They will provide a richer collaboration and communication medium with the service provider.

Business Impact: Offshore testing may reduce expenditures for testing and provide more-thorough testing practices if the appropriate documentation and version management processes are used. However, the long-term goal should be to move to a paradigm of building quality into the software, as opposed to testing quality in, and to move toward automated testing processes and environments.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: AppLabs Technologies; Cognizant Technology Solutions; IBM Global Technology Services; Infogain; ReadyTestGo; Tata Consultancy Services; Wipro Technologies

Model-Driven Architectures
Analysis By: David Norton; David Cearley; David McCoy

Definition: The term "Model Driven Architecture" is a registered trademark of the Object Management Group (OMG). It describes OMG's proposed approach to separating business-level functionality from the technical nuances of its implementation (see www.omg.org/mda). The premise behind OMG's Model Driven Architecture and the broader family of model-driven approaches (MDAs) is to enable business-level functionality to be modeled with standards, such as Unified Modeling Language (UML) in OMG's case; to allow the models to exist independently of platform-induced constraints and requirements; and then to instantiate those models in specific runtime implementations, based on the target platform of choice. "Model-driven," as in "model-driven software engineering," is a commonly (if sometimes generically) used prefix that denotes approaches in which an initial model creation period precedes and guides subsequent efforts; these include model-driven application development (such as SODA), model-driven engineering, and model-driven processes (such as BPM). "Model-driven" has become a catchall phrase for an entire genre of approaches.

Position and Adoption Speed Justification: Core supporting standards, such as UML (referenced by OMG's Model Driven Architecture), are well-established; however, comprehensive MDAs as a whole are less mature than their constituent supporting standards in terms of vendor support and actual deployment in the application architecture, construction and deployment cycle. An MDA represents a long-standing goal of software construction that has seen prior incarnations and waves of Hype Cycle positioning (for example, computer-aided software engineering technology). The goal remains the same: Create a model of the new system, and then enable the model to be transformed into the final system as a separate and significantly simplified step. As always, such grand visions take time to catch on, and they face significant hurdles along the way. A new wave of model-driven hype is emerging.

User Advice: Technical and enterprise architects should strongly consider the implications of implementing architectural solutions that are not MDA-compliant. However, all major vendors will provide adherence, at least to some degree, in their tools, coupled with best-practice extensions beyond MDA standards. Organizations implementing SOAs should pay close attention to the MDA standards and consider acquiring tools that automate models and rules. These include architected rapid application development (ARAD) and architected model-driven (AMD) technologies, and rule engines supporting code-generating and non-code-generating (late-binding) implementations. AMD is primarily suited to complex projects that require a high degree of reuse of business services, where you can put significant time into business process analysis (BPA) and design.
At the same time, no competent organization would want to do AMD-only development, because the additional time and cost of the analysis and design steps would not bring adequate return on investment or agility for time- and/or budget-constrained application development projects. The ideal solution is to mix AMD, ARAD and rapid application development (RAD) methods and tools.
Business Impact: MDAs reinforce the focus on business first and technology second. The concepts focus attention on modeling the business: business rules, business roles, business interactions and so on. The instantiation of these business models in specific software applications or components flows from the business model. By reinforcing the business-level focus and coupling MDAs with SOA concepts, you end up with a system that is inherently more flexible and adaptable. If OMG's Model Driven Architecture or the myriad related MDAs gain widespread acceptance, then the impact on software architecture will be substantial. All vertical domains would benefit from the paradigm.
Benefit Rating: High
Market Penetration: One percent to 5% of target audience
Maturity: Emerging
Sample Vendors: BEA Systems; Borland; Compuware; IBM; Kabira; OMG; Pegasystems; Telelogic; Unisys

Scriptless Testing
Analysis By: Thomas Murphy

Definition: Scriptless-testing tools are second-generation testing tools that reduce the amount of manual scripting needed to create tests by using data-driven approaches. The goal is to keep the test project from becoming another development project, and to enable business-user testing. These tools have a broad set of pre-defined objects that can interact with the application being tested, including error handling and data management. As the tools mature, they will continue to shift toward a more model-driven architecture.

Position and Adoption Speed Justification: Although these tools reduce the amount of code to be written, they don't remove the need for skilled testers. Scriptless testing makes it easier for business analysts to be involved in testing efforts, but the analysts must still be paired with quality engineers to drive testing effectiveness. This is especially important with packaged applications. The emergence and changing nature of SOA and the tools supporting it will extend the time needed for this market to mature, and additional areas (such as data management) will suppress the expected results. Tools and users will reach the Slope of Enlightenment during the next two years and take another three to five years to reach the Plateau of Productivity. The promise of being "script-free" has existed for several years; although improvements have been made, it's unlikely that all scripts can be removed for all applications. Expect the greatest benefits to come from domain-limited tools. Tools will also gain capabilities as model-oriented approaches appear, but these will require skills and model management to be effective.

User Advice: Evaluate tools that reduce the cost of testing. In addition, recognize that these tools aren't yet meaningfully integrated with leading application life cycle management suites, which reduces a team's ability to coordinate effectively. Although these tools will reduce the need for scripting, well-designed tests still require skill — and business users typically don't have the right skills and mind-set for this.
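The data-driven idea behind these tools can be sketched in a few lines: test cases live as data rows, and a single generic runner exercises a pre-defined application object for every row, so adding a case requires no new script. The login() function below is an invented stand-in for such a tool-supplied object.

```typescript
// Hypothetical sketch of data-driven testing: cases are data, not scripts.
interface LoginCase {
  user: string;
  password: string;
  expectSuccess: boolean;
}

// Stand-in for a pre-defined tool object that would drive the real UI,
// including the bundled error handling and data management.
function login(user: string, password: string): boolean {
  return user === "alice" && password === "correct-horse";
}

const cases: LoginCase[] = [
  { user: "alice", password: "correct-horse", expectSuccess: true },
  { user: "alice", password: "guess", expectSuccess: false },
  { user: "bob", password: "correct-horse", expectSuccess: false },
];

// One generic runner covers every row; business analysts can extend the
// table of cases without writing code.
for (const c of cases) {
  const passed = login(c.user, c.password) === c.expectSuccess;
  console.log(`${c.user}: ${passed ? "PASS" : "FAIL"}`);
}
```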
Business Impact: Scripting-centric tools are labor-intensive, not only for the initial creation of tests but also for their maintenance. Scriptless testing will reduce overall testing costs and enable better coverage, which should lead to improved defect detection earlier in the development cycle (thus further reducing overall application costs). However, expectations should be managed: organizations still need qualified testers, and tools continue to have limitations.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Agitar Software; HP; Original Software; Worksoft

Architected, Model-Driven SODA
Analysis By: David Norton

Definition: AMD project approaches to SODA are appropriate for applications, services and components that require robust analysis to understand business rules and requirements, and to automate their design and delivery for maximum reuse and performance. Model-driven approaches can be subdivided into two camps: a transformation approach, in which 100% code generation is the norm or the models themselves are executable, and an elaboration approach, in which patterns and frameworks are used to partially generate the implementation.

Position and Adoption Speed Justification: Business process automation, UML methodologies and best practices are still evolving. They must capture service-oriented business models and rules at a sufficient level of detail for integrated tools to automate or facilitate the generation of Enterprise JavaBeans (EJB) and C# components based on them. The more-widely used ARAD tools are increasingly adding integrated UML and business process modeling capabilities, and bidirectional bridges to leading modeling tools — evolving their technologies beyond ARAD into AMD. Moreover, as users of traditional client/server integrated model-driven development tools migrate their application portfolios to new SOAs, they are expected to replace those tools with the next generation of SODA AMD tools.

User Advice: Organizations that use traditional client/server integrated model-driven development tools should consider using a next-generation AMD tool in conjunction with developing new or replacement SOA applications. Organizations that have no experience with integrated model-driven technologies are advised to evolve to AMD approaches as they extend the use of their ARAD tools into more model-driven SODA projects. Other organizations that have committed to top-down enterprise or business architecture modeling efforts should strongly consider adding an AMD tool to leverage their models through code or rule automation to improve productivity, quality and compliance. Mature AMD organizations developing applications in legacy third-generation language (3GL) and fourth-generation language (4GL) environments need to assess migration to new AMD tools carefully; the newer tools support a more bidirectional, model-to-code elaboration approach, compared with the 100% transformation method that some older tools use. Warning: Model-driven development requires a level of sophistication beyond the capability of most developers. Therefore, be selective in the applications you choose to implement and in whom you staff the effort. Expect the move to AMD to take more than two years; consider it a long-term but major improvement.
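To illustrate the transformation idea (and, by contrast, elaboration), the sketch below turns a tiny platform-independent model into implementation code with a generator. Real AMD tools work from UML and business-process models; the miniature model format here is invented purely for illustration.

```typescript
// Hypothetical sketch of model-to-code transformation: a tiny
// platform-independent model is rendered into code by a generator rather
// than written by hand. The model format is invented for this example.
interface Attribute {
  name: string;
  type: "string" | "number" | "boolean";
}

interface Entity {
  name: string;
  attributes: Attribute[];
}

function generateClass(model: Entity): string {
  const fields = model.attributes
    .map((a) => `  ${a.name}: ${a.type};`)
    .join("\n");
  return `class ${model.name} {\n${fields}\n}`;
}

// A 100% transformation approach would emit all implementation code from
// models like this one; an elaboration approach would generate only a
// skeleton for developers to complete.
const customer: Entity = {
  name: "Customer",
  attributes: [
    { name: "id", type: "number" },
    { name: "name", type: "string" },
  ],
};

console.log(generateClass(customer));
```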
Business Impact: Design tools, coupled with code-rule generators, are used to ensure compliance with business and technical models and architectures, while providing productivity and quality improvements. Coupled with a service-oriented, component-based methodology focused on reuse, and an established base of reusable business and technical artifacts, productivity gains of 10 times or more across the development life cycle are common — but these generally take three or more years to achieve. Moreover, AMD approaches are appropriate only for a subsection of the application portfolio, so they should be coupled with ARAD and other rapid development tools as part of an application development tool suite.
Benefit Rating: High
Market Penetration: One percent to 5% of target audience
Maturity: Emerging
Sample Vendors: CA; Compuware; IBM; Mia-Software; Oracle; Telelogic; Wyde

Enterprise Architecture Tools
Analysis By: Greta James

Definition: Enterprise architects need to bring together information on a variety of subjects, including business processes, organization structures, applications, data (structured and unstructured), technology of various kinds, and interfaces. Architects need to understand and represent the relationships within this information and communicate it to their stakeholders. EA tools address this need by storing information in a repository and providing capabilities to structure, analyze and present the information in a variety of ways. An EA tool should also have a metamodel that supports the business, information and technology viewpoints, as well as the solution architecture. The repository should support relationship integrity among and between objects in these viewpoints/architectures. It should also have the ability to create or import models and artifacts, and to extract repository information to support stakeholder needs, including extracts in graphical, text and executable forms.

Position and Adoption Speed Justification: Most tools have come from a modeling or a metadata repository origin, with the modeling-heritage tools having better visualization capabilities, and the repository-heritage tools generally having better import/export and management capabilities. As the market has matured, vendors have rounded out their capabilities. Small, private companies, several of which are European, predominate in this market. While there were two mergers/acquisitions in 2005, more activity of this kind is anticipated. We expect large technology vendors, such as IBM, Microsoft and Oracle, to enter this market, most likely through acquisition. Although all vendors have sales offices in North America and Europe, only four companies — ASG, IDS Scheer, Sybase and Telelogic — have an extensive direct presence elsewhere. This is unlikely to change in the short term without additional acquisition activity. This market has gradually matured over the past year, with vendors continuing, by and large, to enjoy healthy license revenue growth. Vendors have also continued to add features to their products, such as improving their ability to import information about packaged applications and to analyze information in, or derived from, their repositories. Vendor support for developing Web sites that make repository information available and understandable to a range of stakeholders has increased in power and become more widespread.
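A minimal sketch of the repository idea in the Definition above: objects belong to architecture viewpoints, typed relationships link them, and the repository rejects relationships to objects it does not hold (relationship integrity). The types and checks are invented; real EA tool metamodels are far richer.

```typescript
// Hypothetical sketch of an EA tool repository: viewpoint-tagged objects,
// typed relationships and a relationship-integrity check.
type Viewpoint = "business" | "information" | "technology" | "solution";

interface RepoObject {
  id: string;
  name: string;
  viewpoint: Viewpoint;
}

interface Relationship {
  from: string; // id of the source object
  to: string;   // id of the target object
  kind: string; // e.g., "supports", "runs-on"
}

class Repository {
  private objects = new Map<string, RepoObject>();
  private relationships: Relationship[] = [];

  add(obj: RepoObject): void {
    this.objects.set(obj.id, obj);
  }

  relate(rel: Relationship): void {
    // Relationship integrity: both ends must already exist in the repository.
    if (!this.objects.has(rel.from) || !this.objects.has(rel.to)) {
      throw new Error(`unknown object in relationship ${rel.from} -> ${rel.to}`);
    }
    this.relationships.push(rel);
  }
}

const repo = new Repository();
repo.add({ id: "p1", name: "Order-to-Cash", viewpoint: "business" });
repo.add({ id: "a1", name: "Billing System", viewpoint: "technology" });
repo.relate({ from: "a1", to: "p1", kind: "supports" });
```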
User Advice: When choosing an EA tool, consider five broad functional capabilities:
• The ability to flexibly structure information in a repository in meaningful ways
• The ability to exchange information with other related tools, possibly supplemented by the ability to generate models and other artifacts within the tool
• The ability to analyze the information in the repository
• The ability to communicate information to address the needs of EA stakeholders
• The ability to administer and manage information in the repository

As well as the tool functions, consider other factors, such as the viability of the vendor; the availability and capability of the vendor's sales and support organization; the vendor's experience in your industry and any related tool capabilities, such as support for an industry-specific architecture framework; and the vendor's understanding of EA.
Business Impact: Business strategists, planners and analysts can derive considerable benefit from an EA tool, because it helps them better understand the complex system of IT resources and its support of the business. Crucially, this visibility helps to better align IT with the business strategy, as well as providing other benefits, such as improved disaster recovery planning.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: ASG; Casewise; IDS Scheer; Mega International; Proforma; Sybase; Telelogic; Troux Technologies
Recommended Reading: "Telelogic's System Architect for Enterprise Architecture"
"Follow These Best Practices to Optimize Architecture Tool Benefits"
"Troux: Innovative Enterprise Architecture Tools"
"Cool Vendors in Enterprise Architecture, 2007"

Application Security Testing
Analysis By: Joseph Feiman; Neil MacDonald

Definition: Application security testing is the detection of conditions in applications that are indicative of exploitable vulnerabilities.

Position and Adoption Speed Justification: Two technology markets for application security testing have been evolving rapidly — static application security testing (SAST) and dynamic application security testing (DAST). SAST is a source code and binary code testing technology market; its technologies are applicable at the construction and testing phases of the application life cycle. DAST is a dynamic, black-box application testing technology market (the source code is unavailable to DAST tools); its technologies are applicable at the testing and operation phases of the application life cycle. The adoption of SAST and DAST is impeded by a lack of application security competence and resources in application development organizations. The solution to this problem is coming in the form of emerging security-as-a-service offerings from technology and service providers, whereby providers test applications (often remotely) and supply application development organizations with vulnerability reports and security breach remedies. The speed of adoption for application security testing is accelerating because of a pressing need to resolve the collision of two trends: the growing exposure of e-business applications on the Web and the relentless attacks on these applications. The plateau of technology productivity will be reached in two to five years.

User Advice: Enterprises must adopt application security testing technologies and processes, because the need is strategic. Yet they should take a tactical approach to vendor selection, because of the immaturity of this emerging market. Application development organizations should accept that they, not network security specialists, are responsible for the adoption of an application security discipline.
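As a concrete example of a condition indicative of an exploitable vulnerability, the sketch below shows the kind of finding a SAST tool reports when scanning source code: untrusted input concatenated into a SQL statement, alongside the parameterized form a remediation report would typically recommend. The query() helper is an invented stand-in for a real database client.

```typescript
// Hypothetical illustration of a SAST finding: SQL injection via string
// concatenation, and the parameterized remediation. query() is an invented
// stand-in for a real database client call.
function query(sql: string, params: string[] = []): void {
  console.log("executing:", sql, "with", params);
}

function findUserUnsafe(name: string): void {
  // A SAST tool flags this line: attacker-controlled `name` flows directly
  // into the SQL text, so input like "x' OR '1'='1" rewrites the statement.
  query(`SELECT * FROM users WHERE name = '${name}'`);
}

function findUserSafe(name: string): void {
  // Remediation: the input travels as a bound parameter, never as SQL text.
  query("SELECT * FROM users WHERE name = ?", [name]);
}

findUserUnsafe("x' OR '1'='1");
findUserSafe("x' OR '1'='1");
```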
Business Impact: Enterprises adopting application security testing technologies and processes will benefit from risk and cost reductions, because these technologies and processes provide early detection and correction of vulnerabilities before applications move into production and become open to attack.
Benefit Rating: High
Market Penetration: Five percent to 20% of target audience
Maturity: Adolescent
Sample Vendors: Acunetix; Cenzic; Coverity; Fortify Software; Klocwork; Ounce Labs; SPI Dynamics; Veracode; Watchfire
Recommended Reading: "MarketScope for Web Application Security Vulnerability Scanners, 2006"
"Market Definition and Vendor Selection Criteria for Source Code Security Testing Tools"

Sliding Into the Trough

Project and Portfolio Management
Analysis By: Daniel Stang

Definition: Project and portfolio management (PPM) systems support the business process of effectively allocating capital to projects. They also track and monitor the use of time, people and money to deliver different types of "work." In IT organizations, work can include strategic and nonstrategic IT and non-IT projects; new and existing applications; new and existing IT services made up of software services and technology; application change or enhancement requests; bug and error fixes; routine maintenance procedures; and help desk and trouble tickets. By tracking work demand and execution against the resources (time, people and money) used to complete the work, PPM systems provide visibility into work performance and allow for more-effective planning, decision making and management of the strategic and operational work delivered by IT departments.

Position and Adoption Speed Justification: PPM is first and foremost about changing work execution behaviors. The most robust PPM systems are sophisticated enough to handle everything from time reporting through top-down portfolio analysis, optimization and planning, and can manage various types of work items — from simple IT service requests to multiyear, formally defined projects and programs. Organizations interested in these PPM systems, however, are often not in a position to support all the processes these systems suggest and, therefore, cannot realize all the benefits without undergoing significant change management. PPM systems have been available for years and have grown in maturity, but the intended audience remains immature in its PPM processes. The acquisition of PPM technology is steady, but implementation times can be slowed considerably by PPM process immaturity and/or a lack of management buy-in. Midmarket solutions are emerging in response to the resonance of the PPM value proposition with smaller IT organizations (fewer than 100 resources in the IT organization), and alternative deployment models are appearing in the marketplace. In addition, PPM systems are tracking more than just projects, and we expect them to continue expanding from the project portfolio level to support, track and monitor work from the IT service management and application life cycle management portions of the IT function. It will be another two or three years before end users and PPM systems reach the Slope of Enlightenment, and another two to three years before they reach the Plateau of Productivity.
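The bookkeeping the Definition describes, tracking heterogeneous work items against the time, people and money they consume, can be sketched as follows. The record shape and the over-budget query are invented for illustration and do not reflect any particular PPM product.

```typescript
// Hypothetical sketch of the work-tracking core of a PPM system: many kinds
// of "work," each tracked against the money and people time it consumes.
type WorkType = "project" | "enhancement" | "bug fix" | "service request" | "maintenance";

interface WorkItem {
  name: string;
  type: WorkType;
  budgetedCost: number; // money allocated
  actualCost: number;   // money consumed so far
  hoursLogged: number;  // people time consumed so far
}

// One of the portfolio views a PPM system provides: work that is consuming
// more capital than was allocated to it.
function overBudget(portfolio: WorkItem[]): WorkItem[] {
  return portfolio.filter((w) => w.actualCost > w.budgetedCost);
}

const portfolio: WorkItem[] = [
  { name: "CRM upgrade", type: "project", budgetedCost: 250000, actualCost: 310000, hoursLogged: 4200 },
  { name: "Password resets", type: "service request", budgetedCost: 20000, actualCost: 12000, hoursLogged: 300 },
];

console.log(overBudget(portfolio).map((w) => w.name)); // ["CRM upgrade"]
```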
