'Architecture Testing: Wrongly Ignored!' by Peter Zimmerer



State-of-the-art testing approaches typically include different testing levels such as reviews, unit testing, component testing, integration testing, system testing, and acceptance testing. It is also commonly understood that unit testing is typically done by developers (who are responsible for checking the quality of their units, at least to some extent) and system testing by professional, independent testers. But who is responsible for adequately testing the architecture, one of the key artifacts in developing and maintaining flexible, powerful, and sustainable products and systems? History has shown that too many project failures and troubles are caused by deficiencies in the architecture. Furthermore, what does the term architecture testing mean, and why is this term so seldom used?

To answer these questions, Peter describes what architecture testing is all about and presents pragmatic practices and experiences for implementing it successfully. He offers practical advice on the required tasks and activities as well as the needed involvement, contributions, and responsibilities of software architects in the area of testing – because close cooperation between testers and architects is the key to driving and sustaining a culture of prevention rather than detection across the lifecycle.

Finally, if we claim to be in pursuit of quality, then adequate architecture testing is not only a lever for success but a necessity. It results not only in better quality but also in faster development, by facilitating change and decreasing maintenance effort.

  • Speaker notes – From the Software Engineering Institute (SEI, http://www.sei.cmu.edu/):
    - MTW: Mission Thread Workshop. Build and augment a set of end-to-end System of Systems (SoS) mission threads with quality attribute and engineering considerations together with the stakeholders. Outputs will drive SoS and system/software architecture decisions.
    - QAW: Quality Attribute Workshop (http://www.sei.cmu.edu/architecture/tools/qaw/). Quality Attribute Workshops provide a method for identifying a system's architecture-critical quality attributes, such as availability, performance, security, interoperability, and modifiability, that are derived from mission or business goals. The QAW does not assume the existence of a software architecture. It was developed to complement the SEI Architecture Tradeoff Analysis Method (ATAM) in response to customer requests for a method to identify important quality attributes and clarify system requirements before there is a software architecture to which the ATAM could be applied.
    - ATAM: Architecture Tradeoff Analysis Method (http://www.sei.cmu.edu/architecture/tools/atam/). The ATAM process consists of gathering stakeholders together to analyze business drivers and from these drivers extract quality attributes that are used to create scenarios. These scenarios are then used in conjunction with architectural approaches and architectural decisions to create an analysis of trade-offs, sensitivity points, and risks (or non-risks). This analysis can be converted to risk themes and their impacts, whereupon the process can be repeated.

    1. Architecture Testing: Wrongly Ignored!
       EuroSTAR 2011, Manchester, England, November 24, 2011
       Peter Zimmerer, Principal Engineer
       Siemens AG, Corporate Technology, 81739 Munich, Germany
       peter.zimmerer@siemens.com
       http://www.siemens.com/corporate-technology/
       Copyright © Siemens AG 2011. All rights reserved.
    2. Contents
       - Motivation
       - Architecture testing
       - Practices and tasks in architecture testing
       - Involvement of software architects in testing activities
       - Summary
    3. Motivation
       The system's architecture is important …
       - Vehicle for communication among stakeholders – including testers!
       - Provides system context, scoping, and partitioning of development work
       - Foundation for many lifecycle artifacts – including the test basis!
       - Represents early design decisions and constraints: most critical to get right, hardest to change
       - First design artifact addressing (enabling or inhibiting) objectives, non-functional requirements, and quality attributes
       - Key for systematic reuse, integration with legacy / third-party software, managing change, and cost-effective maintenance and evolution
       - Poor architectures are a major cause of project failures and disasters
       … so, we should test it thoroughly!
    4. Architecture testing
       A search for "architecture" in the glossary (V2.1, April 2010; previously V2.0, Dec 2007) results in only one hit:
       design-based testing:
       An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).
    5. Architecture testing
       Search result hits*:
       Requirements testing    1,010,000
       Architecture testing      102,000
       Document testing          191,000
       Unit testing            4,900,000
       Integration testing     1,630,000
       System testing          2,770,000
       Acceptance testing      1,840,000
       *Hits counted on September 12, 2011 – the ratio has stayed about the same since 2005 …
    6. Testing strategy – based on risks
       Base the testing strategy on business goals and priorities → risk-based testing (RBT) strategy
       Risk identification: Risk = P × D
       - P: probability of failure
         - Frequency of use
         - Chance of failure: criticality & complexity at implementation & usage, lack of quality
       - D: damage (consequence & cost for business & test & usage)
       Risk analysis – product risk analysis workshop
       Risk response – test objectives, test levels, test design methods …
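The Risk = P × D formula from the slide above can be used to rank test areas and allocate test effort. A minimal sketch, assuming simple 1-5 scales; the area names and scores are hypothetical illustrations, not from the talk:

```python
# Illustrative sketch of risk-based test prioritization (Risk = P x D).
# Area names and P/D scores are hypothetical examples.

def risk(probability: int, damage: int) -> int:
    """Risk score: probability of failure times damage (e.g. on 1-5 scales)."""
    return probability * damage

# Candidate test areas with estimated probability (P) and damage (D)
areas = {
    "payment interface":   (4, 5),
    "report generation":   (3, 2),
    "third-party adapter": (5, 5),
    "settings dialog":     (2, 1),
}

# Sort highest-risk areas first to drive the test strategy (test levels,
# test design methods, depth of testing) for each area
ranked = sorted(areas.items(), key=lambda kv: risk(*kv[1]), reverse=True)
for name, (p, d) in ranked:
    print(f"{name}: P={p} D={d} risk={risk(p, d)}")
```

The output order then answers the risk-response question of where to spend the most testing effort first.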
    7. Product risk analysis for software architects
       Identify especially all risks related to the architecture:
       - Architectural requirements (functional and non-functional)
       - Architectural quality attributes
       - Architectural use cases, features, and functions
       - Architectural decisions
       - Design decisions
       - Technology selections
       - Third-party component selections (open source)
       - Bug categories
       Actively participate in a product risk analysis workshop as one important stakeholder
    8. Test levels – example V model
       [Diagram: classic V model. Specification side: user requirements → system requirements → architecture, design → unit specification → coding. Testing side: unit testing → integration testing → system testing → acceptance testing, with each test level based on the corresponding specification artifact.]
    9. Test levels – example V model with architecture testing
       Architecture testing is any testing of architecture and architectural artifacts → mapping of architectural risks to test levels
       [Diagram: the V model from slide 8 augmented at the architecture/design level with:
       - Architecture interrogation: interviews, interactive workshops
       - Static testing: analysis, reviews, previews, inspections
       - MTW, QAW, ATAM (from SEI, see www.sei.cmu.edu/)
       - Prototyping, simulation
       - Test case design, load model specification
       - Code quality management]
    10. Test-driven development (TDD) ≈ preventive testing
        "Preventive testing is built upon the observation that one of the most effective ways of specifying something is to describe (in detail) how you would accept (test) it if someone gave it to you."
        David Gelperin, Bill Hetzel (<1990)
        Real TDD means letting testing drive and influence the architecture and design as well → do not use TDD only on the level of unit and acceptance testing
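One way TDD can drive architecture rather than just unit behavior is when a test written first forces a seam into the design. A minimal sketch; all names (NotificationService, RecordingSender) are hypothetical illustrations:

```python
# Illustrative sketch: a test written first forces message delivery behind
# an injected "sender" dependency - an architecture/design decision made
# to satisfy the test. All names here are hypothetical examples.

class RecordingSender:
    """Test double standing in for a real e-mail/SMS gateway."""
    def __init__(self):
        self.sent = []
    def send(self, recipient, message):
        self.sent.append((recipient, message))

class NotificationService:
    """Production class shaped by the test: the sender is injected,
    not hard-wired, creating an architectural seam for testability."""
    def __init__(self, sender):
        self.sender = sender
    def notify(self, recipient, message):
        self.sender.send(recipient, message.strip())

# The test-first usage: verify behavior through the injected seam
sender = RecordingSender()
NotificationService(sender).notify("ops@example.com", "  disk full ")
assert sender.sent == [("ops@example.com", "disk full")]
```

Had the implementation been written first, the gateway might have been hard-wired; the test-first step is what surfaces the dependency-injection decision at design time.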
    11. Testability – factors
        Control(lability)
        - The better we can control the system, the more and better testing can be done, automated, and optimized
        - Ability to apply and control the inputs to the system under test or place it in specified states (for example, reset to start state)
        - Interaction with the system under test (SUT) through control points
        Visibility / observability
        - What you see is what can be tested
        - Ability to observe the inputs, outputs, states, internals, error conditions, resource utilization, and other side effects of the system under test
        - Interaction with the SUT through observation points
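Control points and observation points can be made concrete in code. A minimal sketch, assuming a hypothetical Counter component; the hooks shown (reset, value, errors) are illustrative, not from the talk:

```python
# Illustrative sketch of control and observation points on an SUT.
# The Counter class and its hooks are hypothetical examples.

class Counter:
    def __init__(self):
        self._value = 0
        self._errors = []

    def increment(self, by=1):
        if by < 0:
            self._errors.append(f"negative increment: {by}")
            return
        self._value += by

    # Control point: place the SUT in a specified state (reset to start state)
    def reset(self):
        self._value = 0
        self._errors.clear()

    # Observation points: expose internal state and error conditions
    @property
    def value(self):
        return self._value

    @property
    def errors(self):
        return tuple(self._errors)

c = Counter()
c.increment(3)
c.increment(-1)           # recorded as an observable error condition
assert c.value == 3
assert c.errors == ("negative increment: -1",)
c.reset()                 # control point brings the SUT back to a known state
assert c.value == 0 and c.errors == ()
```

Without the reset hook and the state/error properties, a tester could neither put the component into a defined start state nor see that the invalid input was rejected.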
    12. Design for Testability (DfT) – who?
        Joint venture between architects (+ developers) and testers → collaboration between different stakeholders
        Testability must be built in by architects (+ developers)
        - Proactive strategy: design the system with testability as a key design criterion
        - Testers (test automation engineers) must define testability requirements to enable effective and efficient test automation
        → Contractual agreement between design and test
        Balancing production code and test code
        - Production code and test code must fit together
        - Manage the design of production code and test code as codependent assets
        Educate stakeholders on the benefit of DfT
    13. Test design methods
        Test design methods are not for testers only …
        Check and foster the usage of test design methods in the development team
        Use test design methods
        - To provide an adequate test basis
        - In architecture reviews, for example scenario testing
        - To design test cases for the architecture, especially for integration testing
        - To prevent bugs (think test-first …)
        "The act of designing tests is one of the most effective bug preventers known."
        Boris Beizer, 1983
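As one concrete example of a systematic test design method developers can apply themselves, here is boundary value analysis in a minimal sketch; the validation rule and function name are hypothetical illustrations:

```python
# Illustrative sketch of a classic test design method, boundary value
# analysis, applied to a hypothetical validation function.

def accepts_age(age: int) -> bool:
    """Hypothetical rule: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

# Boundary value analysis: test at, just below, and just above each boundary,
# instead of picking arbitrary values from the middle of the range.
cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for age, expected in cases.items():
    assert accepts_age(age) == expected
```

Designing these cases before coding the rule would also have pinned down whether the boundaries are inclusive, which is exactly the bug-preventing effect Beizer describes.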
    14. Integration testing
        The goal of integration testing is to test in a grey-box manner:
        - The interaction of components and subsystems
        - The interaction and embedding with the environment and system configuration
        - The dynamic behavior and communication of the system
        - Control flow and data flow
        - The architecture and design as specified in the Software Architecture Description document
        Select an appropriate integration strategy that supports and drives the goals and benefits of integration testing
        Address and follow integration testing needs in the architecture (including testability)
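A grey-box integration test of the kind described above exercises the interaction between components rather than either component in isolation. A minimal sketch; the two components and their interface are hypothetical illustrations:

```python
# Illustrative grey-box integration test sketch: it verifies the call
# sequence and data flow across the interface between two hypothetical
# components, not the components in isolation.

class InventoryStore:
    def __init__(self, stock):
        self.stock = dict(stock)
    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError(f"not enough {item}")
        self.stock[item] -= qty

class OrderService:
    def __init__(self, store):
        self.store = store
        self.log = []            # observation point for the interaction
    def place(self, item, qty):
        self.store.reserve(item, qty)   # component interaction under test
        self.log.append((item, qty))

# Integration test: check control flow and data flow across the interface
store = InventoryStore({"widget": 5})
service = OrderService(store)
service.place("widget", 3)
assert store.stock["widget"] == 2       # data flow reached the store
assert service.log == [("widget", 3)]   # control flow was recorded
```

The test is grey-box in that it knows the architectural interface (OrderService calls InventoryStore.reserve) but not either component's internals.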
    15. Test architectures, test automation
        The architecture of a test system may be even more complex than the architecture of the system under test
        Use good architecture practices to define, specify, create, realize, and maintain the test (automation) system as well
        An efficient quality check of software is not possible in real project life without the right test automation!
        - Incremental, iterative, agile processes
        - Regression testing
        - Maintenance – refactoring, redesign, reengineering
    16. Regression testing – the big questions are always …
        Software architect: Which parts of the architecture must be tested again after a (code) change?
        Test manager: Which test cases must be executed again after a (code) change?
        Remark: a change can be a code change but also any change in the environment or system configuration
        Use knowledge about the architecture and design of the system during change and impact analysis
        - to identify risky areas
        - to select required regression test cases effectively
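Architecture-aware regression test selection can be sketched as a walk over the component dependency graph: a change to one component selects its tests plus the tests of everything that transitively depends on it. The component and test names below are hypothetical illustrations:

```python
# Illustrative sketch of architecture-aware regression test selection.
# Component names, dependencies, and test names are hypothetical examples.

# "A depends on B" is stored as depends_on["A"] = {"B", ...}
depends_on = {
    "ui":          {"services"},
    "services":    {"persistence", "auth"},
    "auth":        set(),
    "persistence": set(),
}

tests_for = {
    "ui": ["test_ui"], "services": ["test_services"],
    "auth": ["test_auth"], "persistence": ["test_persistence"],
}

def impacted(changed, depends_on):
    """The changed component plus all components transitively depending on it."""
    result, frontier = {changed}, {changed}
    while frontier:
        frontier = {c for c, deps in depends_on.items()
                    if deps & frontier and c not in result}
        result |= frontier
    return result

def select_tests(changed):
    """Regression tests required after a change to one component."""
    return sorted(t for comp in impacted(changed, depends_on)
                  for t in tests_for[comp])

print(select_tests("auth"))   # tests for auth and everything built on it
```

A change to "auth" here selects the auth, services, and ui tests but skips the untouched persistence tests, which is the effort saving the slide is after.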
    17. Architectural quality – internal software quality and code quality management
        Checking of architectural conformance
        - Validate the formalized description
        - Evaluate reported architecture violations
        - Simulate a restructuring of a system (refactoring, redesign, reengineering)
        Identification of suspicious components and trends (software aging)
        - Detect potential problems early enough
        Analyzing the impact of design changes and design alternatives
        - Use it as input for well-founded decisions
        Code comprehension
        - Get a well-rounded knowledge of the real state of the system
        [Diagram: intended, clean dependency structure vs. an eroded, tangled one – "Avoid this"]
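A conformance check of the kind listed above compares a formalized architecture description against the dependencies actually found in the code and reports violations. A minimal sketch; the layering rules and module names are hypothetical illustrations:

```python
# Illustrative sketch of an architectural conformance check. Layer names,
# allowed dependencies, and extracted dependencies are hypothetical examples.

# Formalized description: each layer may only depend on the layers listed
allowed = {
    "ui":          {"services"},
    "services":    {"persistence"},
    "persistence": set(),
}

# Dependencies actually extracted from the code base (e.g. from imports)
actual = [
    ("ui", "services"),
    ("services", "persistence"),
    ("ui", "persistence"),      # layer-skipping dependency
    ("persistence", "ui"),      # upward dependency
]

def violations(actual, allowed):
    """Report every actual dependency the architecture does not allow."""
    return [(src, dst) for src, dst in actual if dst not in allowed[src]]

for src, dst in violations(actual, allowed):
    print(f"architecture violation: {src} -> {dst}")
```

Running such a check continuously is one way to detect the architectural erosion ("avoid this") the slide warns about before it becomes entrenched.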
    18. Practices and tasks in architecture testing
        - Understand the mission and the value of testing and promote it
        - Risk-based testing strategy (test levels)
        - Test-driven development
        - Design for testability
        - Test design methods
        - Integration testing
        - Test architectures, test automation
        - Regression testing
        - Architectural quality – internal software quality and code quality management
    19. Summary
        The software architect cooperates closely with the test manager
        - To define, motivate, drive, and enforce a comprehensive understanding of and attitude to testing and quality in the whole team
        - To sustain a culture of prevention rather than detection across the lifecycle
        The software architect and the test manager address "quality" from different viewpoints:
        - Software architect: build quality into the system (or rather into the architecture) and do architecture testing
        - Test manager: provide information related to quality (feedback)