Agile teams deliver functionality incrementally, driven by user stories. In the rush to deliver features, software qualities such as security, scalability, performance, and reliability are frequently overlooked. These characteristics often cut across many user stories, and trying to deal with them late in the game can be difficult, forcing major refactoring and upheaval of the system’s architecture. This churn isn’t inevitable, especially if you adopt a practice of identifying the characteristics key to your system’s success, writing quality scenarios and tests, and delivering on these capabilities at the opportune time. We will show how to write quality scenarios that emphasize architecture capabilities such as usability, security, performance, scalability, internationalization, availability, accessibility, and the like. This session is hands-on: we present some examples, then follow with an exercise that illustrates how you can look at a system, identify its key qualities, and write and test quality scenarios.
Testing System Qualities Agile2012 by Rebecca Wirfs-Brock and Joseph Yoder
Testing System Qualities Rebecca Wirfs-Brock Joseph Yoder Copyright 2012 Rebecca Wirfs-Brock, Joseph Yoder, Wirfs-Brock Associates and The Refactory, Inc.
Introducing Rebecca
• President, Wirfs-Brock Associates
• Agile enthusiast (involved with experience reports since the 1st agile conference; board president, Agile Open Northwest)
• First engineering job in Quality Assurance
• Pattern enthusiast, author, and Hillside Board Treasurer
• Old design geek (author of two object design books, inventor of Responsibility-Driven Design, advocate of CRC cards, hot spot cards, and other low-tech design tools, IEEE Software design columnist)
• Consults and trains top companies on agile architecture, responsibility-driven design, enterprise app design, agile use cases, design storytelling, and pragmatic testing
• Runs marathons!!!
Introducing Joseph
• Founder and Architect, The Refactory, Inc.
• Pattern enthusiast, author, and Hillside Board President
• Author of the Big Ball of Mud pattern
• Adaptive systems expert (programs adaptive software, consults on adaptive architectures, author of adaptive architecture patterns, metadata maven; website: adaptiveobjectmodel.com)
• Agile enthusiast and practitioner
• Business owner (leads a world-class development company)
• Consults and trains top companies on design, refactoring, and pragmatic testing
• Amateur photographer, motorcycle enthusiast, enjoys dancing samba!!!
Some Agile Myths
• System qualities can be added at the last moment.
• We can easily adapt to changing requirements (new requirements).
• You can change the system fast!!!
• Don’t worry about the architecture.
Most testers spend the majority of their time writing functional tests…
BUT THERE’S A LOT MORE TO TEST THAN THAT YOUR SOFTWARE WORKS AS ADVERTISED
Pragmatic Testing Issues
• What kinds of testing should you focus on?
• Who writes the tests?
• Who runs them?
• When are they run?
• How are they run?
Testing System Qualities
Qualities we could consider…
• Usability
• Security
• Performance
• Scalability
• Internationalization
• Availability
• Flexibility
• Accessibility
• Location
• Regulation
Discussion
What are the important qualities in your system?
Functional Tests vs. System Quality Tests
Functional:
– How do I …?
– Tests that user stories work as advertised
  • “As a reviewer I want to add a note to a chart”
  • Compute the charge for an invoice
– Tests of boundary conditions
  • Can I add more than one note at the same place?
  • Are excess charges computed correctly?
System Quality Tests
• How does the system handle…?
– system load? (number of add-note transactions per minute under normal load)
– system support for…? (simultaneously updating charts)
– usability? (ease of locating and selecting notes)
• Tests that emphasize architecture capabilities and tangible system characteristics
Specify Measurable Results
• Meter: An appropriate way to measure
• Scale: The values you expect
– Natural scale: response time in milliseconds
– Constructed: a 1–10 ranking
– Proxy: projecting throughput using sample data
Example Performance Scenario
Source of Stimulus: Users
Stimulus: Initiate order transactions
Artifact: System
Environment: Normal conditions
Response: Transactions processed
Response Measure: Average latency of 2 seconds
“Users initiate 1,000 order transactions per minute under normal operations; transactions are processed with an average latency of 2 seconds.”
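A scenario like this can be checked directly in code. The sketch below is a minimal Python illustration; `process_order` is a hypothetical stand-in for the real transaction call, not part of the tutorial. It times a batch of sample transactions and asserts the scenario’s response measure:

```python
import statistics
import time

def process_order(order_id):
    """Hypothetical stand-in for the system's order-processing call."""
    time.sleep(0.001)  # simulate work; replace with a real transaction
    return True

def average_latency_seconds(num_transactions=100):
    """Run sample transactions and return the average latency in seconds."""
    latencies = []
    for order_id in range(num_transactions):
        start = time.perf_counter()
        process_order(order_id)
        latencies.append(time.perf_counter() - start)
    return statistics.mean(latencies)

# Response measure from the scenario: average latency of 2 seconds.
assert average_latency_seconds() <= 2.0
```

The same harness can report percentiles or worst-case latency if the scenario’s response measure calls for them.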
Possible Performance Scenario Values
Source: External systems, users, components, databases
Stimulus: Periodic events, sporadic or random events (or a combination)
Artifact: The system’s services, data, or other resources
Environment: The state the system can be in: normal, overloaded, partial operation, emergency mode…
Response: Process the event or event sequence and possibly change the level of service
Response Measure: Time it takes to process the arriving events (latency, or deadline by which an event must be processed), the variation in this time, the number of events that can be processed within a particular time interval, or a characterization of events that cannot be processed (miss rate, data loss)
Example Security Scenario
Source of Stimulus: Correctly identified individual
Stimulus: Tries to transfer money between accounts
Artifact: Data within the system
Environment: Normal operation
Response: System maintains audit trail
Response Measure: Correct data is restored within a day of the reported event
“A known, authorized user transfers money between accounts. The user is later identified as an embezzler by the institution they belong to and the system then restores funds to the original account.”
Possible Security Scenario Values
Source: A human or another system. May be identified (correctly or not) or be unknown.
Stimulus: An attack or an attempt to break security by trying to display information, change information, access system services, or reduce system availability
Artifact: The system’s services or data
Environment: The system might be online or offline, connected to or disconnected from the network, behind a firewall or open to the network
Response: Authenticates the user; hides the identity of the user; blocks or allows access to data and/or services; records access/modification attempts by identity; stores data in encrypted format; recognizes unexplained high demand and informs a user or other system, or restricts availability
Response Measure: Time/effort/resources required to circumvent security measures with probability of success; probability of detecting an attack; probability of identifying the individual responsible for an attack or access/modification; time/effort/resources to restore data/services; extent to which data/services are damaged and/or legitimate access denied
Example Modifiability Scenario
Source of Stimulus: Developer
Stimulus: Add support for a new service code
Artifact: UI, source code, service code table data definition
Environment: Compile time
Response: Modification made with no schema changes
Response Measure: 2 days to code and test, 1 day to deploy
“A developer adds support for a new service code to the system by adding the service code to the definitions table and modifying the UI to make it available to users. The modification is made with no data schema changes.”
Possible Modifiability Scenario Values
Source: End user, developer, system administrator
Stimulus: Wishes to add/delete/modify/vary functionality, or to change some system quality such as availability, responsiveness, or capacity
Artifact: What is to be changed: system user interface, platform, environment, or another system or API with which it interoperates
Environment: When the change can be made: runtime, compile time, build time, when deployed,…
Response: Locates places to be modified; makes the modification without affecting other functionality; tests and deploys the modification
Response Measure: Cost in effort, money, and time; number of elements affected; extent to which the change affects other functions or qualities
Example Availability Scenario
Source of Stimulus: Unknown sensor
Stimulus: Unexpected report
Artifact: System
Environment: Normal conditions
Response: Record the “raw” report in a database and log the event
Response Measure: No lost data
“An unknown sensor sends a report. The system stores the raw data in the unknown sensor database (to potentially be processed or purged later) and logs the event.”
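One way to make the “no lost data” response measure testable is to count retained reports. This is a minimal sketch under assumed names — `handle_report` and `raw_sensor_db` are illustrative, not from the tutorial:

```python
import logging

raw_sensor_db = []  # stand-in for the "unknown sensor" database

def handle_report(sensor_id, reading, known_sensors):
    """Store raw data from unknown sensors and log the event."""
    if sensor_id not in known_sensors:
        raw_sensor_db.append({"sensor": sensor_id, "raw": reading})
        logging.warning("report from unknown sensor %s stored raw", sensor_id)
        return "stored_raw"
    return "processed"

reports = [("s1", 10), ("s9", 42), ("s9", 43)]
for sensor_id, reading in reports:
    handle_report(sensor_id, reading, known_sensors={"s1"})

# Response measure: no lost data -- every unknown-sensor report is retained.
assert len(raw_sensor_db) == 2
```

A fuller test would also feed the system malformed reports and verify that none disappear without a log entry.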
Example Usability Scenario
Source of Stimulus: End user
Stimulus: Cancel request
Artifact: System
Environment: At runtime
Response: System backs out the pending transaction and releases resources
Response Measure: Cancel takes less than one second
“A user can cancel a pending request within one second.”
EXERCISE
WRITE A QUALITY SCENARIO FOR A DATA COLLECTION AND ANALYSIS SYSTEM
Write a Quality Scenario
• Using the template handout, write a quality scenario. Be specific.
• Options:
– Response to data received from an unknown sensor
– Adding new analyzer plug-ins
– Relocating a sensor and being able to correlate data from its previous location (if desired)
– Detecting and troubleshooting equipment failures
– Detecting unusual weather conditions and incidents
• What quality does your scenario address?
You can’t test warm and fuzzy…
“It should be easy to place an online order”
TURN VAGUE STATEMENTS INTO CONCRETE MEASURABLE ACTIONS
Turning Warm Fuzzies into a Testable Usability Scenario
Source of Stimulus: Novice user
Stimulus: Place order
Artifact: System
Environment: Web interface with online help
Response: Order completed
Response Measure: Time to complete the order entry task
“80% of novice users should be able to place an order in under 3 minutes without assistance.”
or
“80% of novice users should be able to place an order in under 3 minutes using only online help.”
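Once the statement is measurable, it can be scored mechanically. A minimal sketch — the task times below are made-up sample data from a hypothetical usability session, not real results:

```python
# Hypothetical task times (seconds) observed for ten novice users.
task_times = [95, 120, 150, 170, 200, 110, 130, 160, 185, 140]

def fraction_within(times, limit_seconds):
    """Fraction of users who completed the task in under the limit."""
    return sum(1 for t in times if t < limit_seconds) / len(times)

# Response measure: 80% of novice users place an order in under 3 minutes.
assert fraction_within(task_times, 180) >= 0.80
```

The same check works for either variant of the scenario; only the population being measured (unassisted vs. online-help-only) changes.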
Some options…
• Toss out a reasonable number, then discuss to reach a consensus
• Average informed individuals’ estimates
• Use an existing system as a baseline
• Use values from similar scenarios
• Benchmark working code
• …
There is more than “pass” or “fail”
• Landing zone: lets you define a range of acceptable values
– Minimal: OK, we can live with that
– Target: Realistic goal, what we are aiming for
– Outstanding: This would be great, if everything goes well
Landing Zones
• Roll up product or project success into several key scenarios
• Easier to make sense of the bigger picture:
– What happens when one quality scenario edges below minimum? How do others trend?
– When will targets be achieved? At what cost?
Example Landing Zone
Performance — Throughput (txns per day): Minimum 50,000; Target 70,000; Outstanding 90,000
Performance — Average txn time: Minimum 2 seconds; Target 1 second; Outstanding < 1 second
Data Quality — Intersystem data consistency (percent of critical data attributes consistent): Minimum 95%; Target 97%; Outstanding 97%
Data Quality — Data accuracy: Minimum 97%; Target 99%; Outstanding > 99%

Managing Landing Zones
• Too many scenarios and you lose track of what’s really important
• Define a core set; organize and group
• Roll up into aggregate scenarios
• Re-calibrate values as you implement more functionality… be agile
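Landing-zone checks are easy to automate. A minimal sketch — the function name and interface are our own illustration, not part of the tutorial — classifies a measured value against the three zone boundaries:

```python
def landing_zone(value, minimum, target, outstanding, higher_is_better=True):
    """Classify a measured value against a landing zone's boundaries."""
    if not higher_is_better:
        # Flip signs so "higher is better" comparisons apply uniformly.
        value, minimum, target, outstanding = -value, -minimum, -target, -outstanding
    if value >= outstanding:
        return "outstanding"
    if value >= target:
        return "target"
    if value >= minimum:
        return "minimal"
    return "below minimum"

# Throughput zone: 50,000 / 70,000 / 90,000 txns per day.
assert landing_zone(75_000, 50_000, 70_000, 90_000) == "target"
# Average txn time: lower is better (2 s / 1 s / under 1 s).
assert landing_zone(1.5, 2.0, 1.0, 0.9, higher_is_better=False) == "minimal"
```

Running this classifier over all core scenarios in CI gives the trend view the previous slide asks for: you can watch which qualities are edging below minimum as functionality grows.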
Quality Testing Cycle for TDD
Identify and write quality scenarios → Write code and tests → Verify quality scenarios → Clean up code (refactor/revise/rework) → All tests succeed → Ready to release? → Ship it!!!
Test Coverage Can Overlap…
• Smoke tests
• Quality scenarios
• Integration tests
• Acceptance tests (functional and qualities)
• Unit tests
Pragmatic Test Driven Development Is…
• Practical. Testing system qualities can fit into and enhance your current testing.
• Thoughtful. What qualities need to be tested? Who should write the tests? When should you test for qualities?
• Realistic. You only have so much time and energy. Test essential qualities.
Summary
• Quality scenarios are easy to write and read.
• Writing quality test scenarios drives out important cross-cutting concerns.
• If you don’t pay attention to qualities, they can be hard to achieve at the last moment.
• Measuring system qualities can require specialized testing/measurement tools.